00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4079 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3669 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.078 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.078 The recommended git tool is: git 00:00:00.079 using credential 00000000-0000-0000-0000-000000000002 00:00:00.080 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.104 Fetching changes from the remote Git repository 00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.132 Using shallow fetch with depth 1 00:00:00.132 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.132 > git --version # timeout=10 00:00:00.158 > git --version # 'git version 2.39.2' 00:00:00.158 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.190 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.190 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.674 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.689 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.702 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.702 > git config core.sparsecheckout # timeout=10 00:00:04.714 > git read-tree -mu HEAD # timeout=10 00:00:04.730 > git checkout -f 
db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.756 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.757 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.903 [Pipeline] Start of Pipeline 00:00:04.917 [Pipeline] library 00:00:04.919 Loading library shm_lib@master 00:00:04.919 Library shm_lib@master is cached. Copying from home. 00:00:04.932 [Pipeline] node 00:00:04.953 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:04.955 [Pipeline] { 00:00:04.965 [Pipeline] catchError 00:00:04.966 [Pipeline] { 00:00:04.980 [Pipeline] wrap 00:00:04.989 [Pipeline] { 00:00:04.999 [Pipeline] stage 00:00:05.001 [Pipeline] { (Prologue) 00:00:05.024 [Pipeline] echo 00:00:05.026 Node: VM-host-WFP7 00:00:05.034 [Pipeline] cleanWs 00:00:05.045 [WS-CLEANUP] Deleting project workspace... 00:00:05.045 [WS-CLEANUP] Deferred wipeout is used... 00:00:05.052 [WS-CLEANUP] done 00:00:05.298 [Pipeline] setCustomBuildProperty 00:00:05.393 [Pipeline] httpRequest 00:00:06.155 [Pipeline] echo 00:00:06.156 Sorcerer 10.211.164.101 is alive 00:00:06.163 [Pipeline] retry 00:00:06.164 [Pipeline] { 00:00:06.173 [Pipeline] httpRequest 00:00:06.177 HttpMethod: GET 00:00:06.178 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.178 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.179 Response Code: HTTP/1.1 200 OK 00:00:06.180 Success: Status code 200 is in the accepted range: 200,404 00:00:06.180 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.843 [Pipeline] } 00:00:06.865 [Pipeline] // retry 00:00:06.873 [Pipeline] sh 00:00:07.157 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.172 [Pipeline] httpRequest 00:00:07.502 [Pipeline] echo 00:00:07.505 Sorcerer 10.211.164.101 
is alive 00:00:07.516 [Pipeline] retry 00:00:07.518 [Pipeline] { 00:00:07.526 [Pipeline] httpRequest 00:00:07.530 HttpMethod: GET 00:00:07.531 URL: http://10.211.164.101/packages/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz 00:00:07.531 Sending request to url: http://10.211.164.101/packages/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz 00:00:07.532 Response Code: HTTP/1.1 200 OK 00:00:07.532 Success: Status code 200 is in the accepted range: 200,404 00:00:07.532 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz 00:00:26.804 [Pipeline] } 00:00:26.822 [Pipeline] // retry 00:00:26.828 [Pipeline] sh 00:00:27.114 + tar --no-same-owner -xf spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz 00:00:29.672 [Pipeline] sh 00:00:29.959 + git -C spdk log --oneline -n5 00:00:29.959 2a91567e4 CHANGELOG.md: corrected typo 00:00:29.959 6c35d974e lib/nvme: destruct controllers that failed init asynchronously 00:00:29.959 414f91a0c lib/nvmf: Fix double free of connect request 00:00:29.959 d8f6e798d nvme: Fix discovery loop when target has no entry 00:00:29.959 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen 00:00:29.985 [Pipeline] withCredentials 00:00:29.997 > git --version # timeout=10 00:00:30.012 > git --version # 'git version 2.39.2' 00:00:30.031 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:30.033 [Pipeline] { 00:00:30.045 [Pipeline] retry 00:00:30.048 [Pipeline] { 00:00:30.065 [Pipeline] sh 00:00:30.351 + git ls-remote http://dpdk.org/git/dpdk main 00:00:30.626 [Pipeline] } 00:00:30.649 [Pipeline] // retry 00:00:30.654 [Pipeline] } 00:00:30.674 [Pipeline] // withCredentials 00:00:30.686 [Pipeline] httpRequest 00:00:31.131 [Pipeline] echo 00:00:31.133 Sorcerer 10.211.164.101 is alive 00:00:31.142 [Pipeline] retry 00:00:31.145 [Pipeline] { 00:00:31.159 [Pipeline] httpRequest 00:00:31.165 HttpMethod: GET 00:00:31.165 URL: 
http://10.211.164.101/packages/dpdk_5744e912341ee26a0dd5b9ec28b16b8a4e45d1bc.tar.gz 00:00:31.166 Sending request to url: http://10.211.164.101/packages/dpdk_5744e912341ee26a0dd5b9ec28b16b8a4e45d1bc.tar.gz 00:00:31.180 Response Code: HTTP/1.1 200 OK 00:00:31.181 Success: Status code 200 is in the accepted range: 200,404 00:00:31.181 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_5744e912341ee26a0dd5b9ec28b16b8a4e45d1bc.tar.gz 00:01:07.009 [Pipeline] } 00:01:07.027 [Pipeline] // retry 00:01:07.035 [Pipeline] sh 00:01:07.321 + tar --no-same-owner -xf dpdk_5744e912341ee26a0dd5b9ec28b16b8a4e45d1bc.tar.gz 00:01:08.789 [Pipeline] sh 00:01:09.074 + git -C dpdk log --oneline -n5 00:01:09.074 5744e91234 ci: remove workaround for ASan in Ubuntu GHA images 00:01:09.074 ef6ed529b2 net/ntnic: fix Toeplitz key and log with mask 00:01:09.074 c4e84cd7f7 net/ntnic: fix log messages 00:01:09.074 de9f35ebf2 net/ntnic: move API header file 00:01:09.074 190e99be4f net/ntnic: add supplementary macros 00:01:09.097 [Pipeline] writeFile 00:01:09.116 [Pipeline] sh 00:01:09.401 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:09.414 [Pipeline] sh 00:01:09.728 + cat autorun-spdk.conf 00:01:09.728 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.728 SPDK_RUN_ASAN=1 00:01:09.728 SPDK_RUN_UBSAN=1 00:01:09.728 SPDK_TEST_RAID=1 00:01:09.728 SPDK_TEST_NATIVE_DPDK=main 00:01:09.728 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:09.728 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.735 RUN_NIGHTLY=1 00:01:09.737 [Pipeline] } 00:01:09.750 [Pipeline] // stage 00:01:09.765 [Pipeline] stage 00:01:09.767 [Pipeline] { (Run VM) 00:01:09.781 [Pipeline] sh 00:01:10.070 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:10.071 + echo 'Start stage prepare_nvme.sh' 00:01:10.071 Start stage prepare_nvme.sh 00:01:10.071 + [[ -n 4 ]] 00:01:10.071 + disk_prefix=ex4 00:01:10.071 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:10.071 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:10.071 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:10.071 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.071 ++ SPDK_RUN_ASAN=1 00:01:10.071 ++ SPDK_RUN_UBSAN=1 00:01:10.071 ++ SPDK_TEST_RAID=1 00:01:10.071 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:10.071 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:10.071 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.071 ++ RUN_NIGHTLY=1 00:01:10.071 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:10.071 + nvme_files=() 00:01:10.071 + declare -A nvme_files 00:01:10.071 + backend_dir=/var/lib/libvirt/images/backends 00:01:10.071 + nvme_files['nvme.img']=5G 00:01:10.071 + nvme_files['nvme-cmb.img']=5G 00:01:10.071 + nvme_files['nvme-multi0.img']=4G 00:01:10.071 + nvme_files['nvme-multi1.img']=4G 00:01:10.071 + nvme_files['nvme-multi2.img']=4G 00:01:10.071 + nvme_files['nvme-openstack.img']=8G 00:01:10.071 + nvme_files['nvme-zns.img']=5G 00:01:10.071 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:10.071 + (( SPDK_TEST_FTL == 1 )) 00:01:10.071 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:10.071 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:10.071 + for nvme in "${!nvme_files[@]}" 00:01:10.071 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:10.071 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.071 + for nvme in "${!nvme_files[@]}" 00:01:10.071 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:10.071 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.071 + for nvme in "${!nvme_files[@]}" 00:01:10.071 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:10.071 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:10.071 + for nvme in "${!nvme_files[@]}" 00:01:10.071 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:10.071 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.071 + for nvme in "${!nvme_files[@]}" 00:01:10.071 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:10.071 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.071 + for nvme in "${!nvme_files[@]}" 00:01:10.071 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:10.071 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.071 + for nvme in "${!nvme_files[@]}" 00:01:10.071 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:11.011 
Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.011 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:11.011 + echo 'End stage prepare_nvme.sh' 00:01:11.011 End stage prepare_nvme.sh 00:01:11.023 [Pipeline] sh 00:01:11.309 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:11.309 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:11.309 00:01:11.309 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:11.309 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:11.309 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:11.309 HELP=0 00:01:11.309 DRY_RUN=0 00:01:11.309 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:11.309 NVME_DISKS_TYPE=nvme,nvme, 00:01:11.309 NVME_AUTO_CREATE=0 00:01:11.309 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:11.309 NVME_CMB=,, 00:01:11.309 NVME_PMR=,, 00:01:11.309 NVME_ZNS=,, 00:01:11.309 NVME_MS=,, 00:01:11.309 NVME_FDP=,, 00:01:11.309 SPDK_VAGRANT_DISTRO=fedora39 00:01:11.309 SPDK_VAGRANT_VMCPU=10 00:01:11.309 SPDK_VAGRANT_VMRAM=12288 00:01:11.309 SPDK_VAGRANT_PROVIDER=libvirt 00:01:11.309 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:11.309 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:11.309 SPDK_OPENSTACK_NETWORK=0 00:01:11.309 VAGRANT_PACKAGE_BOX=0 00:01:11.309 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:11.309 
FORCE_DISTRO=true 00:01:11.309 VAGRANT_BOX_VERSION= 00:01:11.309 EXTRA_VAGRANTFILES= 00:01:11.309 NIC_MODEL=virtio 00:01:11.309 00:01:11.309 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:11.309 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:13.221 Bringing machine 'default' up with 'libvirt' provider... 00:01:13.791 ==> default: Creating image (snapshot of base box volume). 00:01:13.791 ==> default: Creating domain with the following settings... 00:01:13.791 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732634231_5c1896084e6b7900b78d 00:01:13.791 ==> default: -- Domain type: kvm 00:01:13.791 ==> default: -- Cpus: 10 00:01:13.791 ==> default: -- Feature: acpi 00:01:13.791 ==> default: -- Feature: apic 00:01:13.791 ==> default: -- Feature: pae 00:01:13.791 ==> default: -- Memory: 12288M 00:01:13.791 ==> default: -- Memory Backing: hugepages: 00:01:13.791 ==> default: -- Management MAC: 00:01:13.791 ==> default: -- Loader: 00:01:13.791 ==> default: -- Nvram: 00:01:13.791 ==> default: -- Base box: spdk/fedora39 00:01:13.791 ==> default: -- Storage pool: default 00:01:13.791 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732634231_5c1896084e6b7900b78d.img (20G) 00:01:13.791 ==> default: -- Volume Cache: default 00:01:13.791 ==> default: -- Kernel: 00:01:13.791 ==> default: -- Initrd: 00:01:13.791 ==> default: -- Graphics Type: vnc 00:01:13.791 ==> default: -- Graphics Port: -1 00:01:13.791 ==> default: -- Graphics IP: 127.0.0.1 00:01:13.791 ==> default: -- Graphics Password: Not defined 00:01:13.791 ==> default: -- Video Type: cirrus 00:01:13.791 ==> default: -- Video VRAM: 9216 00:01:13.791 ==> default: -- Sound Type: 00:01:13.791 ==> default: -- Keymap: en-us 00:01:13.791 ==> default: -- TPM Path: 00:01:13.791 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:13.791 ==> default: -- Command line args: 00:01:13.791 
==> default: -> value=-device, 00:01:13.791 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:13.791 ==> default: -> value=-drive, 00:01:13.791 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:13.791 ==> default: -> value=-device, 00:01:13.791 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.791 ==> default: -> value=-device, 00:01:13.791 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:13.791 ==> default: -> value=-drive, 00:01:13.791 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:13.791 ==> default: -> value=-device, 00:01:13.791 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.791 ==> default: -> value=-drive, 00:01:13.791 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:13.791 ==> default: -> value=-device, 00:01:13.791 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.791 ==> default: -> value=-drive, 00:01:13.791 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:13.791 ==> default: -> value=-device, 00:01:13.791 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.052 ==> default: Creating shared folders metadata... 00:01:14.052 ==> default: Starting domain. 00:01:15.960 ==> default: Waiting for domain to get an IP address... 00:01:30.856 ==> default: Waiting for SSH to become available... 00:01:32.238 ==> default: Configuring and enabling network interfaces... 
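The `-device`/`-drive` pairs listed above assemble, pair by pair, into one QEMU argument list: one `nvme` controller per serial, then a backing `-drive` plus an `nvme-ns` namespace device per image. A minimal sketch of how such a list can be built (bash; the helper names are hypothetical, only the ids, serials, and paths come from the log, and the emulator itself is not invoked here):

```shell
#!/usr/bin/env bash
# Sketch: assemble the NVMe controller/namespace argument list seen above.
backend=/var/lib/libvirt/images/backends

qemu_args=()
add_nvme_ctrl() { # args: id serial addr
  qemu_args+=(-device "nvme,id=$1,serial=$2,addr=$3")
}
add_nvme_ns() { # args: image ctrl-bus drive-id nsid
  qemu_args+=(-drive "format=raw,file=$1,if=none,id=$3")
  qemu_args+=(-device "nvme-ns,drive=$3,bus=$2,nsid=$4,zoned=false,logical_block_size=4096,physical_block_size=4096")
}

# Controller nvme-0: single namespace backed by ex4-nvme.img
add_nvme_ctrl nvme-0 12340 0x10
add_nvme_ns "$backend/ex4-nvme.img" nvme-0 nvme-0-drive0 1

# Controller nvme-1: three namespaces backed by the multi* images
add_nvme_ctrl nvme-1 12341 0x11
add_nvme_ns "$backend/ex4-nvme-multi0.img" nvme-1 nvme-1-drive0 1
add_nvme_ns "$backend/ex4-nvme-multi1.img" nvme-1 nvme-1-drive1 2
add_nvme_ns "$backend/ex4-nvme-multi2.img" nvme-1 nvme-1-drive2 3

printf '%s\n' "${qemu_args[@]}"
```

The array would then be passed to the qemu-system-x86_64 binary named in the Setup line.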
00:01:38.815 default: SSH address: 192.168.121.249:22 00:01:38.815 default: SSH username: vagrant 00:01:38.815 default: SSH auth method: private key 00:01:41.384 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:49.516 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:54.798 ==> default: Mounting SSHFS shared folder... 00:01:57.335 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:57.335 ==> default: Checking Mount.. 00:01:58.273 ==> default: Folder Successfully Mounted! 00:01:58.273 ==> default: Running provisioner: file... 00:01:59.654 default: ~/.gitconfig => .gitconfig 00:01:59.914 00:01:59.914 SUCCESS! 00:01:59.914 00:01:59.914 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:59.914 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:59.914 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
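Each `Formatting` line earlier reports the raw byte count for the human-readable size passed with `-s` (5G → 5368709120, 4G → 4294967296, 8G → 8589934592). A small sketch of that conversion, assuming binary 1024-based suffixes as the log's numbers imply (the `to_bytes` helper is illustrative, not part of the scripts above):

```shell
#!/usr/bin/env bash
# Convert a size suffix such as 5G to bytes, matching the size=
# values printed in the Formatting lines above (1 G = 1024^3 bytes).
to_bytes() {
  local n=${1%[KMGT]} suffix=${1##*[0-9]}
  case $suffix in
    K) echo $(( n * 1024 ))    ;;
    M) echo $(( n * 1024**2 )) ;;
    G) echo $(( n * 1024**3 )) ;;
    T) echo $(( n * 1024**4 )) ;;
    *) echo "$n"               ;;
  esac
}

to_bytes 5G   # 5368709120, as in the ex4-nvme.img line
to_bytes 4G   # 4294967296, as in the multi-namespace images
to_bytes 8G   # 8589934592, as in the openstack image
```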
00:01:59.914 00:01:59.924 [Pipeline] } 00:01:59.942 [Pipeline] // stage 00:01:59.953 [Pipeline] dir 00:01:59.954 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:59.955 [Pipeline] { 00:01:59.971 [Pipeline] catchError 00:01:59.973 [Pipeline] { 00:01:59.988 [Pipeline] sh 00:02:00.270 + vagrant ssh-config --host vagrant 00:02:00.270 + sed -ne /^Host/,$p 00:02:00.270 + tee ssh_conf 00:02:02.811 Host vagrant 00:02:02.811 HostName 192.168.121.249 00:02:02.811 User vagrant 00:02:02.811 Port 22 00:02:02.811 UserKnownHostsFile /dev/null 00:02:02.811 StrictHostKeyChecking no 00:02:02.811 PasswordAuthentication no 00:02:02.811 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:02.811 IdentitiesOnly yes 00:02:02.811 LogLevel FATAL 00:02:02.811 ForwardAgent yes 00:02:02.811 ForwardX11 yes 00:02:02.811 00:02:02.827 [Pipeline] withEnv 00:02:02.829 [Pipeline] { 00:02:02.841 [Pipeline] sh 00:02:03.124 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:03.124 source /etc/os-release 00:02:03.124 [[ -e /image.version ]] && img=$(< /image.version) 00:02:03.124 # Minimal, systemd-like check. 00:02:03.124 if [[ -e /.dockerenv ]]; then 00:02:03.124 # Clear garbage from the node's name: 00:02:03.124 # agt-er_autotest_547-896 -> autotest_547-896 00:02:03.124 # $HOSTNAME is the actual container id 00:02:03.124 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:03.124 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:03.124 # We can assume this is a mount from a host where container is running, 00:02:03.124 # so fetch its hostname to easily identify the target swarm worker. 
00:02:03.124 container="$(< /etc/hostname) ($agent)" 00:02:03.124 else 00:02:03.124 # Fallback 00:02:03.124 container=$agent 00:02:03.124 fi 00:02:03.124 fi 00:02:03.124 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:03.124 00:02:03.395 [Pipeline] } 00:02:03.415 [Pipeline] // withEnv 00:02:03.423 [Pipeline] setCustomBuildProperty 00:02:03.437 [Pipeline] stage 00:02:03.440 [Pipeline] { (Tests) 00:02:03.457 [Pipeline] sh 00:02:03.739 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:04.014 [Pipeline] sh 00:02:04.302 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:04.578 [Pipeline] timeout 00:02:04.578 Timeout set to expire in 1 hr 30 min 00:02:04.580 [Pipeline] { 00:02:04.592 [Pipeline] sh 00:02:04.874 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:05.441 HEAD is now at 2a91567e4 CHANGELOG.md: corrected typo 00:02:05.454 [Pipeline] sh 00:02:05.728 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:06.003 [Pipeline] sh 00:02:06.287 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:06.561 [Pipeline] sh 00:02:06.842 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:07.103 ++ readlink -f spdk_repo 00:02:07.103 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:07.103 + [[ -n /home/vagrant/spdk_repo ]] 00:02:07.103 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:07.103 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:07.103 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:07.103 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:07.103 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:07.103 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:07.103 + cd /home/vagrant/spdk_repo 00:02:07.103 + source /etc/os-release 00:02:07.103 ++ NAME='Fedora Linux' 00:02:07.103 ++ VERSION='39 (Cloud Edition)' 00:02:07.103 ++ ID=fedora 00:02:07.103 ++ VERSION_ID=39 00:02:07.103 ++ VERSION_CODENAME= 00:02:07.103 ++ PLATFORM_ID=platform:f39 00:02:07.103 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:07.103 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:07.103 ++ LOGO=fedora-logo-icon 00:02:07.103 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:07.103 ++ HOME_URL=https://fedoraproject.org/ 00:02:07.103 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:07.103 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:07.103 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:07.103 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:07.103 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:07.103 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:07.103 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:07.103 ++ SUPPORT_END=2024-11-12 00:02:07.103 ++ VARIANT='Cloud Edition' 00:02:07.103 ++ VARIANT_ID=cloud 00:02:07.103 + uname -a 00:02:07.103 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:07.103 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:07.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:07.674 Hugepages 00:02:07.674 node hugesize free / total 00:02:07.674 node0 1048576kB 0 / 0 00:02:07.674 node0 2048kB 0 / 0 00:02:07.674 00:02:07.674 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.674 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:07.674 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:07.674 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:07.674 + rm -f /tmp/spdk-ld-path 00:02:07.674 + source autorun-spdk.conf 00:02:07.674 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.674 ++ SPDK_RUN_ASAN=1 00:02:07.674 ++ SPDK_RUN_UBSAN=1 00:02:07.674 ++ SPDK_TEST_RAID=1 00:02:07.674 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:07.674 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:07.674 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.674 ++ RUN_NIGHTLY=1 00:02:07.674 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.674 + [[ -n '' ]] 00:02:07.674 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:07.674 + for M in /var/spdk/build-*-manifest.txt 00:02:07.674 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:07.674 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.674 + for M in /var/spdk/build-*-manifest.txt 00:02:07.674 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.674 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.935 + for M in /var/spdk/build-*-manifest.txt 00:02:07.935 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.935 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.935 ++ uname 00:02:07.935 + [[ Linux == \L\i\n\u\x ]] 00:02:07.935 + sudo dmesg -T 00:02:07.935 + sudo dmesg --clear 00:02:07.935 + dmesg_pid=6162 00:02:07.935 + sudo dmesg -Tw 00:02:07.935 + [[ Fedora Linux == FreeBSD ]] 00:02:07.935 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.935 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.935 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.935 + [[ -x /usr/src/fio-static/fio ]] 00:02:07.935 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.935 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.935 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.935 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:07.935 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.935 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.935 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.935 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.935 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.935 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.935 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.935 15:18:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:07.935 15:18:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:07.935 15:18:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.935 15:18:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:07.935 15:18:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:07.935 15:18:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:07.935 15:18:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=main 00:02:07.935 15:18:06 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:07.935 15:18:06 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.935 15:18:06 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1 00:02:07.935 15:18:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:07.935 15:18:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.196 15:18:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:08.196 15:18:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:08.196 15:18:06 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:08.196 15:18:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.196 15:18:06 -- 
scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.196 15:18:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.196 15:18:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.196 15:18:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.196 15:18:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.196 15:18:06 -- paths/export.sh@5 -- $ export PATH 00:02:08.196 15:18:06 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.196 15:18:06 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:08.196 15:18:06 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:08.196 15:18:06 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732634286.XXXXXX 00:02:08.196 15:18:06 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732634286.nmW7C8 00:02:08.196 15:18:06 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:08.196 15:18:06 -- common/autobuild_common.sh@499 -- $ '[' -n main ']' 00:02:08.196 15:18:06 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:08.196 15:18:06 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:08.196 15:18:06 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:08.196 15:18:06 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.196 15:18:06 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:08.196 15:18:06 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:08.196 15:18:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.196 15:18:06 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:08.196 15:18:06 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:08.196 15:18:06 -- pm/common@17 -- $ local monitor 00:02:08.196 15:18:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.196 15:18:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.196 15:18:06 -- pm/common@25 -- $ sleep 1 00:02:08.196 15:18:06 -- pm/common@21 -- $ date +%s 00:02:08.196 15:18:06 -- pm/common@21 -- $ date +%s 00:02:08.196 15:18:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732634286 00:02:08.196 15:18:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732634286 00:02:08.196 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732634286_collect-cpu-load.pm.log 00:02:08.196 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732634286_collect-vmstat.pm.log 00:02:09.138 15:18:07 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:09.138 15:18:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:09.138 15:18:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:09.138 15:18:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:09.138 15:18:07 -- spdk/autobuild.sh@16 -- $ date -u 00:02:09.138 Tue Nov 26 03:18:07 PM UTC 2024 00:02:09.138 15:18:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:09.138 v25.01-pre-240-g2a91567e4 00:02:09.138 15:18:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:09.138 15:18:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:09.138 15:18:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 
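The autobuild prologue above stamps its scratch workspace and both pm monitor logs with a single epoch value (1732634286 in this run): one `date +%s` call feeds the `mktemp -dt` template and the `monitor.autobuild.sh.<stamp>_<tool>.pm.log` names in the Redirecting lines. A sketch of that naming pattern (the `monitor_log` helper is hypothetical, not an SPDK function):

```shell
#!/usr/bin/env bash
# Sketch: one timestamp names both the mktemp scratch dir
# and the per-tool resource-monitor logs, as in the prologue above.
stamp=$(date +%s)
workspace=$(mktemp -dt "spdk_${stamp}.XXXXXX")

monitor_log() { # arg: tool name -> log filename
  echo "monitor.autobuild.sh.${stamp}_$1.pm.log"
}

echo "workspace: $workspace"
monitor_log collect-cpu-load
monitor_log collect-vmstat
```

Sharing one stamp ties the workspace and its monitor logs to the same build, which is what lets the Redirecting lines above be matched back to the `SPDK_WORKSPACE` created moments earlier.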
00:02:09.138 15:18:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:09.138 15:18:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.138 ************************************ 00:02:09.138 START TEST asan 00:02:09.138 ************************************ 00:02:09.138 using asan 00:02:09.138 15:18:07 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:09.138 00:02:09.138 real 0m0.001s 00:02:09.138 user 0m0.001s 00:02:09.138 sys 0m0.000s 00:02:09.138 15:18:07 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:09.138 15:18:07 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:09.138 ************************************ 00:02:09.138 END TEST asan 00:02:09.138 ************************************ 00:02:09.400 15:18:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:09.400 15:18:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:09.400 15:18:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:09.400 15:18:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:09.400 15:18:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.400 ************************************ 00:02:09.400 START TEST ubsan 00:02:09.400 ************************************ 00:02:09.400 using ubsan 00:02:09.400 15:18:07 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:09.400 00:02:09.400 real 0m0.001s 00:02:09.400 user 0m0.000s 00:02:09.400 sys 0m0.001s 00:02:09.400 15:18:07 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:09.400 15:18:07 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:09.400 ************************************ 00:02:09.400 END TEST ubsan 00:02:09.400 ************************************ 00:02:09.400 15:18:07 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:09.400 15:18:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:09.400 15:18:07 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:09.400 
15:18:07 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:09.400 15:18:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:09.400 15:18:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.400 ************************************ 00:02:09.400 START TEST build_native_dpdk 00:02:09.400 ************************************ 00:02:09.400 15:18:07 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@71 -- 
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:09.400 5744e91234 ci: remove workaround for ASan in Ubuntu GHA images 00:02:09.400 ef6ed529b2 net/ntnic: fix Toeplitz key and log with mask 00:02:09.400 c4e84cd7f7 net/ntnic: fix log messages 00:02:09.400 de9f35ebf2 net/ntnic: move API header file 00:02:09.400 190e99be4f net/ntnic: add supplementary macros 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc3 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" 
"power/intel_uncore" "power/kvm_vm") 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:09.400 15:18:07 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc3 21.11.0 00:02:09.400 15:18:07 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 21.11.0 00:02:09.400 15:18:07 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:09.400 15:18:07 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:09.400 15:18:07 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:09.400 15:18:07 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:09.400 15:18:07 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:09.400 15:18:07 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:09.400 15:18:07 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 
)) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:09.401 15:18:07 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:09.401 patching file config/rte_config.h 00:02:09.401 Hunk #1 succeeded at 72 (offset 13 lines). 
00:02:09.401 15:18:07 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc3 24.07.0 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 24.07.0 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:09.401 15:18:07 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc3 24.07.0 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc3 '>=' 24.07.0 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:09.401 15:18:07 
build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:09.401 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:09.402 15:18:07 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:09.402 15:18:07 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1 00:02:09.661 patching file drivers/bus/pci/linux/pci_uio.c 00:02:09.661 15:18:07 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:09.661 15:18:07 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:09.661 15:18:07 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:09.661 15:18:07 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:09.661 15:18:07 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native 
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:16.254 The Meson build system 00:02:16.254 Version: 1.5.0 00:02:16.254 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:16.254 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:16.254 Build type: native build 00:02:16.254 Project name: DPDK 00:02:16.254 Project version: 24.11.0-rc3 00:02:16.254 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:16.254 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:16.254 Host machine cpu family: x86_64 00:02:16.254 Host machine cpu: x86_64 00:02:16.254 Message: ## Building in Developer Mode ## 00:02:16.254 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:16.254 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:16.254 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:16.254 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:16.254 Program cat found: YES (/usr/bin/cat) 00:02:16.254 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:16.254 Compiler for C supports arguments -march=native: YES 00:02:16.254 Checking for size of "void *" : 8 00:02:16.254 Checking for size of "void *" : 8 (cached) 00:02:16.254 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:16.254 Library m found: YES 00:02:16.254 Library numa found: YES 00:02:16.254 Has header "numaif.h" : YES 00:02:16.254 Library fdt found: NO 00:02:16.254 Library execinfo found: NO 00:02:16.254 Has header "execinfo.h" : YES 00:02:16.254 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:16.254 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:16.254 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:16.254 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:16.254 Run-time dependency openssl found: YES 3.1.1 00:02:16.254 Run-time dependency libpcap found: YES 1.10.4 00:02:16.254 Has header "pcap.h" with dependency libpcap: YES 00:02:16.254 Compiler for C supports arguments -Wcast-qual: YES 00:02:16.254 Compiler for C supports arguments -Wdeprecated: YES 00:02:16.254 Compiler for C supports arguments -Wformat: YES 00:02:16.254 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:16.254 Compiler for C supports arguments -Wformat-security: NO 00:02:16.254 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:16.254 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:16.254 Compiler for C supports arguments -Wnested-externs: YES 00:02:16.254 Compiler for C supports arguments -Wold-style-definition: YES 00:02:16.254 Compiler for C supports arguments -Wpointer-arith: YES 00:02:16.254 Compiler for C supports arguments -Wsign-compare: YES 00:02:16.254 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:16.254 Compiler for C supports arguments -Wundef: YES 00:02:16.254 Compiler for C supports arguments -Wwrite-strings: YES 00:02:16.254 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:16.254 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:16.254 Program objdump found: YES (/usr/bin/objdump) 00:02:16.254 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:02:16.254 Checking if "AVX512 checking" compiles: YES 00:02:16.254 Fetching value of define "__AVX512F__" : 1 00:02:16.254 Fetching value of define "__AVX512BW__" : 1 00:02:16.254 Fetching value of define "__AVX512DQ__" : 1 00:02:16.254 Fetching value of define "__AVX512VL__" : 1 00:02:16.254 Fetching value of define "__SSE4_2__" : 1 00:02:16.254 Fetching value of define "__AES__" : 1 00:02:16.254 Fetching value of define "__AVX__" : 1 00:02:16.254 Fetching value of define "__AVX2__" : 1 00:02:16.254 Fetching value of define "__AVX512BW__" : 1 00:02:16.254 Fetching value of define "__AVX512CD__" : 1 00:02:16.254 Fetching value of define "__AVX512DQ__" : 1 00:02:16.254 Fetching value of define "__AVX512F__" : 1 00:02:16.254 Fetching value of define "__AVX512VL__" : 1 00:02:16.254 Fetching value of define "__PCLMUL__" : 1 00:02:16.254 Fetching value of define "__RDRND__" : 1 00:02:16.254 Fetching value of define "__RDSEED__" : 1 00:02:16.254 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:16.254 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:16.254 Message: lib/log: Defining dependency "log" 00:02:16.254 Message: lib/kvargs: Defining dependency "kvargs" 00:02:16.254 Message: lib/argparse: Defining dependency "argparse" 00:02:16.254 Message: lib/telemetry: Defining dependency "telemetry" 00:02:16.254 Checking for function "pthread_attr_setaffinity_np" : YES 00:02:16.254 Checking for function "getentropy" : NO 00:02:16.254 Message: lib/eal: Defining dependency "eal" 00:02:16.254 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:16.254 Message: lib/ring: Defining dependency "ring" 00:02:16.254 Message: lib/rcu: Defining dependency "rcu" 00:02:16.254 Message: lib/mempool: Defining dependency "mempool" 
00:02:16.254 Message: lib/mbuf: Defining dependency "mbuf" 00:02:16.254 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:16.254 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:16.254 Compiler for C supports arguments -mpclmul: YES 00:02:16.254 Compiler for C supports arguments -maes: YES 00:02:16.254 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:16.254 Message: lib/net: Defining dependency "net" 00:02:16.254 Message: lib/meter: Defining dependency "meter" 00:02:16.254 Message: lib/ethdev: Defining dependency "ethdev" 00:02:16.254 Message: lib/pci: Defining dependency "pci" 00:02:16.254 Message: lib/cmdline: Defining dependency "cmdline" 00:02:16.254 Message: lib/metrics: Defining dependency "metrics" 00:02:16.254 Message: lib/hash: Defining dependency "hash" 00:02:16.254 Message: lib/timer: Defining dependency "timer" 00:02:16.254 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:16.254 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:16.254 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:16.254 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:16.254 Message: lib/acl: Defining dependency "acl" 00:02:16.254 Message: lib/bbdev: Defining dependency "bbdev" 00:02:16.254 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:16.254 Run-time dependency libelf found: YES 0.191 00:02:16.254 Message: lib/bpf: Defining dependency "bpf" 00:02:16.254 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:16.254 Message: lib/compressdev: Defining dependency "compressdev" 00:02:16.254 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:16.254 Message: lib/distributor: Defining dependency "distributor" 00:02:16.254 Message: lib/dmadev: Defining dependency "dmadev" 00:02:16.254 Message: lib/efd: Defining dependency "efd" 00:02:16.254 Message: lib/eventdev: Defining dependency "eventdev" 00:02:16.254 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:16.254 
Message: lib/gpudev: Defining dependency "gpudev" 00:02:16.254 Message: lib/gro: Defining dependency "gro" 00:02:16.254 Message: lib/gso: Defining dependency "gso" 00:02:16.254 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:16.254 Message: lib/jobstats: Defining dependency "jobstats" 00:02:16.254 Message: lib/latencystats: Defining dependency "latencystats" 00:02:16.254 Message: lib/lpm: Defining dependency "lpm" 00:02:16.254 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:16.254 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:16.254 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:16.254 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:16.254 Message: lib/member: Defining dependency "member" 00:02:16.255 Message: lib/pcapng: Defining dependency "pcapng" 00:02:16.255 Message: lib/power: Defining dependency "power" 00:02:16.255 Message: lib/rawdev: Defining dependency "rawdev" 00:02:16.255 Message: lib/regexdev: Defining dependency "regexdev" 00:02:16.255 Message: lib/mldev: Defining dependency "mldev" 00:02:16.255 Message: lib/rib: Defining dependency "rib" 00:02:16.255 Message: lib/reorder: Defining dependency "reorder" 00:02:16.255 Message: lib/sched: Defining dependency "sched" 00:02:16.255 Message: lib/security: Defining dependency "security" 00:02:16.255 Message: lib/stack: Defining dependency "stack" 00:02:16.255 Has header "linux/userfaultfd.h" : YES 00:02:16.255 Has header "linux/vduse.h" : YES 00:02:16.255 Message: lib/vhost: Defining dependency "vhost" 00:02:16.255 Message: lib/ipsec: Defining dependency "ipsec" 00:02:16.255 Message: lib/pdcp: Defining dependency "pdcp" 00:02:16.255 Message: lib/fib: Defining dependency "fib" 00:02:16.255 Message: lib/port: Defining dependency "port" 00:02:16.255 Message: lib/pdump: Defining dependency "pdump" 00:02:16.255 Message: lib/table: Defining dependency "table" 00:02:16.255 Message: lib/pipeline: Defining dependency "pipeline" 00:02:16.255 
Message: lib/graph: Defining dependency "graph" 00:02:16.255 Message: lib/node: Defining dependency "node" 00:02:16.255 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:16.255 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:16.255 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:16.255 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:16.255 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:16.255 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:16.255 Compiler for C supports arguments -Wno-unused-value: YES 00:02:16.255 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:16.255 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:16.255 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:16.255 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:16.255 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:16.255 Message: drivers/power/acpi: Defining dependency "power_acpi" 00:02:16.255 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate" 00:02:16.255 Message: drivers/power/cppc: Defining dependency "power_cppc" 00:02:16.255 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate" 00:02:16.255 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore" 00:02:16.255 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm" 00:02:16.255 Has header "sys/epoll.h" : YES 00:02:16.255 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:16.255 Configuring doxy-api-html.conf using configuration 00:02:16.255 Configuring doxy-api-man.conf using configuration 00:02:16.255 Program mandb found: YES (/usr/bin/mandb) 00:02:16.255 Program sphinx-build found: NO 00:02:16.255 Program sphinx-build found: NO 00:02:16.255 Configuring rte_build_config.h using configuration 00:02:16.255 Message: 00:02:16.255 ================= 
00:02:16.255 Applications Enabled 00:02:16.255 ================= 00:02:16.255 00:02:16.255 apps: 00:02:16.255 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:16.255 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:16.255 test-pmd, test-regex, test-sad, test-security-perf, 00:02:16.255 00:02:16.255 Message: 00:02:16.255 ================= 00:02:16.255 Libraries Enabled 00:02:16.255 ================= 00:02:16.255 00:02:16.255 libs: 00:02:16.255 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:16.255 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:16.255 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:16.255 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:16.255 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:16.255 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:16.255 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:16.255 graph, node, 00:02:16.255 00:02:16.255 Message: 00:02:16.255 =============== 00:02:16.255 Drivers Enabled 00:02:16.255 =============== 00:02:16.255 00:02:16.255 common: 00:02:16.255 00:02:16.255 bus: 00:02:16.255 pci, vdev, 00:02:16.255 mempool: 00:02:16.255 ring, 00:02:16.255 dma: 00:02:16.255 00:02:16.255 net: 00:02:16.255 i40e, 00:02:16.255 raw: 00:02:16.255 00:02:16.255 crypto: 00:02:16.255 00:02:16.255 compress: 00:02:16.255 00:02:16.255 regex: 00:02:16.255 00:02:16.255 ml: 00:02:16.255 00:02:16.255 vdpa: 00:02:16.255 00:02:16.255 event: 00:02:16.255 00:02:16.255 baseband: 00:02:16.255 00:02:16.255 gpu: 00:02:16.255 00:02:16.255 power: 00:02:16.255 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:02:16.255 00:02:16.255 Message: 00:02:16.255 ================= 00:02:16.255 Content Skipped 00:02:16.255 ================= 00:02:16.255 00:02:16.255 apps: 00:02:16.255 
00:02:16.255 libs:
00:02:16.255
00:02:16.255 drivers:
00:02:16.255 common/cpt: not in enabled drivers build config
00:02:16.255 common/dpaax: not in enabled drivers build config
00:02:16.255 common/iavf: not in enabled drivers build config
00:02:16.255 common/idpf: not in enabled drivers build config
00:02:16.255 common/ionic: not in enabled drivers build config
00:02:16.255 common/mvep: not in enabled drivers build config
00:02:16.255 common/octeontx: not in enabled drivers build config
00:02:16.255 bus/auxiliary: not in enabled drivers build config
00:02:16.255 bus/cdx: not in enabled drivers build config
00:02:16.255 bus/dpaa: not in enabled drivers build config
00:02:16.255 bus/fslmc: not in enabled drivers build config
00:02:16.255 bus/ifpga: not in enabled drivers build config
00:02:16.255 bus/platform: not in enabled drivers build config
00:02:16.255 bus/uacce: not in enabled drivers build config
00:02:16.255 bus/vmbus: not in enabled drivers build config
00:02:16.255 common/cnxk: not in enabled drivers build config
00:02:16.255 common/mlx5: not in enabled drivers build config
00:02:16.255 common/nfp: not in enabled drivers build config
00:02:16.255 common/nitrox: not in enabled drivers build config
00:02:16.255 common/qat: not in enabled drivers build config
00:02:16.255 common/sfc_efx: not in enabled drivers build config
00:02:16.255 mempool/bucket: not in enabled drivers build config
00:02:16.255 mempool/cnxk: not in enabled drivers build config
00:02:16.255 mempool/dpaa: not in enabled drivers build config
00:02:16.255 mempool/dpaa2: not in enabled drivers build config
00:02:16.255 mempool/octeontx: not in enabled drivers build config
00:02:16.255 mempool/stack: not in enabled drivers build config
00:02:16.255 dma/cnxk: not in enabled drivers build config
00:02:16.255 dma/dpaa: not in enabled drivers build config
00:02:16.255 dma/dpaa2: not in enabled drivers build config
00:02:16.255 dma/hisilicon: not in enabled drivers build config
00:02:16.255 dma/idxd: not in enabled drivers build config
00:02:16.255 dma/ioat: not in enabled drivers build config
00:02:16.255 dma/odm: not in enabled drivers build config
00:02:16.255 dma/skeleton: not in enabled drivers build config
00:02:16.255 net/af_packet: not in enabled drivers build config
00:02:16.255 net/af_xdp: not in enabled drivers build config
00:02:16.255 net/ark: not in enabled drivers build config
00:02:16.255 net/atlantic: not in enabled drivers build config
00:02:16.255 net/avp: not in enabled drivers build config
00:02:16.255 net/axgbe: not in enabled drivers build config
00:02:16.255 net/bnx2x: not in enabled drivers build config
00:02:16.255 net/bnxt: not in enabled drivers build config
00:02:16.255 net/bonding: not in enabled drivers build config
00:02:16.255 net/cnxk: not in enabled drivers build config
00:02:16.255 net/cpfl: not in enabled drivers build config
00:02:16.255 net/cxgbe: not in enabled drivers build config
00:02:16.255 net/dpaa: not in enabled drivers build config
00:02:16.255 net/dpaa2: not in enabled drivers build config
00:02:16.255 net/e1000: not in enabled drivers build config
00:02:16.255 net/ena: not in enabled drivers build config
00:02:16.255 net/enetc: not in enabled drivers build config
00:02:16.255 net/enetfec: not in enabled drivers build config
00:02:16.255 net/enic: not in enabled drivers build config
00:02:16.255 net/failsafe: not in enabled drivers build config
00:02:16.255 net/fm10k: not in enabled drivers build config
00:02:16.255 net/gve: not in enabled drivers build config
00:02:16.255 net/hinic: not in enabled drivers build config
00:02:16.255 net/hns3: not in enabled drivers build config
00:02:16.255 net/iavf: not in enabled drivers build config
00:02:16.255 net/ice: not in enabled drivers build config
00:02:16.255 net/idpf: not in enabled drivers build config
00:02:16.255 net/igc: not in enabled drivers build config
00:02:16.255 net/ionic: not in enabled drivers build config
00:02:16.255 net/ipn3ke: not in enabled drivers build config
00:02:16.255 net/ixgbe: not in enabled drivers build config
00:02:16.256 net/mana: not in enabled drivers build config
00:02:16.256 net/memif: not in enabled drivers build config
00:02:16.256 net/mlx4: not in enabled drivers build config
00:02:16.256 net/mlx5: not in enabled drivers build config
00:02:16.256 net/mvneta: not in enabled drivers build config
00:02:16.256 net/mvpp2: not in enabled drivers build config
00:02:16.256 net/netvsc: not in enabled drivers build config
00:02:16.256 net/nfb: not in enabled drivers build config
00:02:16.256 net/nfp: not in enabled drivers build config
00:02:16.256 net/ngbe: not in enabled drivers build config
00:02:16.256 net/ntnic: not in enabled drivers build config
00:02:16.256 net/null: not in enabled drivers build config
00:02:16.256 net/octeontx: not in enabled drivers build config
00:02:16.256 net/octeon_ep: not in enabled drivers build config
00:02:16.256 net/pcap: not in enabled drivers build config
00:02:16.256 net/pfe: not in enabled drivers build config
00:02:16.256 net/qede: not in enabled drivers build config
00:02:16.256 net/r8169: not in enabled drivers build config
00:02:16.256 net/ring: not in enabled drivers build config
00:02:16.256 net/sfc: not in enabled drivers build config
00:02:16.256 net/softnic: not in enabled drivers build config
00:02:16.256 net/tap: not in enabled drivers build config
00:02:16.256 net/thunderx: not in enabled drivers build config
00:02:16.256 net/txgbe: not in enabled drivers build config
00:02:16.256 net/vdev_netvsc: not in enabled drivers build config
00:02:16.256 net/vhost: not in enabled drivers build config
00:02:16.256 net/virtio: not in enabled drivers build config
00:02:16.256 net/vmxnet3: not in enabled drivers build config
00:02:16.256 net/zxdh: not in enabled drivers build config
00:02:16.256 raw/cnxk_bphy: not in enabled drivers build config
00:02:16.256 raw/cnxk_gpio: not in enabled drivers build config
00:02:16.256 raw/cnxk_rvu_lf: not in enabled drivers build config
00:02:16.256 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:16.256 raw/gdtc: not in enabled drivers build config
00:02:16.256 raw/ifpga: not in enabled drivers build config
00:02:16.256 raw/ntb: not in enabled drivers build config
00:02:16.256 raw/skeleton: not in enabled drivers build config
00:02:16.256 crypto/armv8: not in enabled drivers build config
00:02:16.256 crypto/bcmfs: not in enabled drivers build config
00:02:16.256 crypto/caam_jr: not in enabled drivers build config
00:02:16.256 crypto/ccp: not in enabled drivers build config
00:02:16.256 crypto/cnxk: not in enabled drivers build config
00:02:16.256 crypto/dpaa_sec: not in enabled drivers build config
00:02:16.256 crypto/dpaa2_sec: not in enabled drivers build config
00:02:16.256 crypto/ionic: not in enabled drivers build config
00:02:16.256 crypto/ipsec_mb: not in enabled drivers build config
00:02:16.256 crypto/mlx5: not in enabled drivers build config
00:02:16.256 crypto/mvsam: not in enabled drivers build config
00:02:16.256 crypto/nitrox: not in enabled drivers build config
00:02:16.256 crypto/null: not in enabled drivers build config
00:02:16.256 crypto/octeontx: not in enabled drivers build config
00:02:16.256 crypto/openssl: not in enabled drivers build config
00:02:16.256 crypto/scheduler: not in enabled drivers build config
00:02:16.256 crypto/uadk: not in enabled drivers build config
00:02:16.256 crypto/virtio: not in enabled drivers build config
00:02:16.256 compress/isal: not in enabled drivers build config
00:02:16.256 compress/mlx5: not in enabled drivers build config
00:02:16.256 compress/nitrox: not in enabled drivers build config
00:02:16.256 compress/octeontx: not in enabled drivers build config
00:02:16.256 compress/uadk: not in enabled drivers build config
00:02:16.256 compress/zlib: not in enabled drivers build config
00:02:16.256 regex/mlx5: not in enabled drivers build config
00:02:16.256 regex/cn9k: not in enabled drivers build config
00:02:16.256 ml/cnxk: not in enabled drivers build config
00:02:16.256 vdpa/ifc: not in enabled drivers build config
00:02:16.256 vdpa/mlx5: not in enabled drivers build config
00:02:16.256 vdpa/nfp: not in enabled drivers build config
00:02:16.256 vdpa/sfc: not in enabled drivers build config
00:02:16.256 event/cnxk: not in enabled drivers build config
00:02:16.256 event/dlb2: not in enabled drivers build config
00:02:16.256 event/dpaa: not in enabled drivers build config
00:02:16.256 event/dpaa2: not in enabled drivers build config
00:02:16.256 event/dsw: not in enabled drivers build config
00:02:16.256 event/opdl: not in enabled drivers build config
00:02:16.256 event/skeleton: not in enabled drivers build config
00:02:16.256 event/sw: not in enabled drivers build config
00:02:16.256 event/octeontx: not in enabled drivers build config
00:02:16.256 baseband/acc: not in enabled drivers build config
00:02:16.256 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:16.256 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:16.256 baseband/la12xx: not in enabled drivers build config
00:02:16.256 baseband/null: not in enabled drivers build config
00:02:16.256 baseband/turbo_sw: not in enabled drivers build config
00:02:16.256 gpu/cuda: not in enabled drivers build config
00:02:16.256 power/amd_uncore: not in enabled drivers build config
00:02:16.256
00:02:16.256
00:02:16.256 Message: DPDK build config complete:
00:02:16.256 source path = "/home/vagrant/spdk_repo/dpdk"
00:02:16.256 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp"
00:02:16.256 Build targets in project: 246
00:02:16.256
00:02:16.256 DPDK 24.11.0-rc3
00:02:16.256
00:02:16.256 User defined options
00:02:16.256 libdir : lib
00:02:16.256 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:16.256 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:16.256 c_link_args :
00:02:16.256 enable_docs : false
00:02:16.256 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:16.256 enable_kmods : false
00:02:16.824 machine : native
00:02:16.825 tests : false
00:02:16.825
00:02:16.825 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:16.825 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:17.084 15:18:15 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:17.084 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:17.084 [1/766] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o
00:02:17.084 [2/766] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o
00:02:17.084 [3/766] Compiling C object lib/librte_log.a.p/log_log_journal.c.o
00:02:17.084 [4/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:17.084 [5/766] Compiling C object lib/librte_log.a.p/log_log_color.c.o
00:02:17.344 [6/766] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:17.344 [7/766] Linking static target lib/librte_kvargs.a
00:02:17.344 [8/766] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:17.344 [9/766] Linking static target lib/librte_log.a
00:02:17.344 [10/766] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:02:17.344 [11/766] Linking static target lib/librte_argparse.a
00:02:17.344 [12/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:17.344 [13/766] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.604 [14/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:17.604 [15/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:17.604 [16/766] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.604 [17/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.604 [18/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:17.604 [19/766] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.604 [20/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.604 [21/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:17.604 [22/766] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.604 [23/766] Linking target lib/librte_log.so.25.0 00:02:17.863 [24/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:02:17.863 [25/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:17.863 [26/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.863 [27/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:17.863 [28/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.863 [29/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:18.123 [30/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.123 [31/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:18.123 [32/766] Linking static target lib/librte_telemetry.a 00:02:18.123 [33/766] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:02:18.123 [34/766] Linking target lib/librte_kvargs.so.25.0 00:02:18.123 [35/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:18.123 [36/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.123 [37/766] Linking target lib/librte_argparse.so.25.0 00:02:18.123 [38/766] Generating symbol file 
lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:02:18.123 [39/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:18.383 [40/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.383 [41/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:18.383 [42/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:18.383 [43/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:18.383 [44/766] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.383 [45/766] Linking target lib/librte_telemetry.so.25.0 00:02:18.383 [46/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:18.383 [47/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.642 [48/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:18.642 [49/766] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:02:18.642 [50/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:18.642 [51/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:18.642 [52/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:18.642 [53/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:18.642 [54/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:02:18.642 [55/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.901 [56/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.901 [57/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.901 [58/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.901 [59/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 
00:02:18.901 [60/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.901 [61/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.901 [62/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.901 [63/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.159 [64/766] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:19.159 [65/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:19.159 [66/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:19.159 [67/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:19.160 [68/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:19.160 [69/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:19.160 [70/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.160 [71/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.418 [72/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:19.418 [73/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:19.419 [74/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.419 [75/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:19.419 [76/766] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:19.419 [77/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:19.677 [78/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.677 [79/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:19.677 [80/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.677 [81/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.677 [82/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 
00:02:19.677 [83/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:19.677 [84/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.677 [85/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.936 [86/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.936 [87/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:19.936 [88/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.936 [89/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.936 [90/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:19.936 [91/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.936 [92/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:19.936 [93/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:20.195 [94/766] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:20.195 [95/766] Linking static target lib/librte_ring.a 00:02:20.195 [96/766] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.195 [97/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:20.195 [98/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:20.453 [99/766] Linking static target lib/librte_eal.a 00:02:20.453 [100/766] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:20.453 [101/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:20.453 [102/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:20.453 [103/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:20.713 [104/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.713 [105/766] Linking static target lib/librte_mempool.a 00:02:20.713 [106/766] 
Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:20.713 [107/766] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:20.713 [108/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:20.713 [109/766] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:20.713 [110/766] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:20.713 [111/766] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:20.713 [112/766] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.713 [113/766] Linking static target lib/librte_rcu.a 00:02:20.970 [114/766] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:20.970 [115/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:20.970 [116/766] Linking static target lib/librte_mbuf.a 00:02:20.970 [117/766] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:20.970 [118/766] Linking static target lib/librte_net.a 00:02:20.970 [119/766] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.229 [120/766] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:21.229 [121/766] Linking static target lib/librte_meter.a 00:02:21.229 [122/766] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.229 [123/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:21.229 [124/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:21.229 [125/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:21.229 [126/766] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.488 [127/766] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.488 [128/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:21.488 [129/766] Generating lib/mbuf.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:21.748 [130/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:21.748 [131/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.007 [132/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.008 [133/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.008 [134/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:22.267 [135/766] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:22.267 [136/766] Linking static target lib/librte_pci.a 00:02:22.267 [137/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.267 [138/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.267 [139/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:22.267 [140/766] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.527 [141/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:22.527 [142/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.527 [143/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:22.527 [144/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:22.527 [145/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:22.527 [146/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:22.527 [147/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:22.527 [148/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.527 [149/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:22.527 [150/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:22.527 [151/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 
00:02:22.527 [152/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:22.786 [153/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.786 [154/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.786 [155/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:22.786 [156/766] Linking static target lib/librte_cmdline.a 00:02:22.786 [157/766] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:23.046 [158/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:23.046 [159/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:23.046 [160/766] Linking static target lib/librte_metrics.a 00:02:23.046 [161/766] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:23.307 [162/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:23.307 [163/766] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.307 [164/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:23.307 [165/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o 00:02:23.608 [166/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:23.608 [167/766] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.608 [168/766] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:23.608 [169/766] Linking static target lib/librte_timer.a 00:02:23.868 [170/766] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:23.868 [171/766] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:23.868 [172/766] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:23.868 [173/766] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.128 [174/766] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:24.388 [175/766] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:24.388 [176/766] Linking static target lib/librte_bitratestats.a 00:02:24.388 [177/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:24.388 [178/766] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.647 [179/766] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:24.647 [180/766] Linking static target lib/librte_bbdev.a 00:02:24.647 [181/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:24.647 [182/766] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.647 [183/766] Linking static target lib/librte_hash.a 00:02:24.905 [184/766] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:24.905 [185/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.905 [186/766] Linking static target lib/librte_ethdev.a 00:02:24.905 [187/766] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.905 [188/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:25.164 [189/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:25.164 [190/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:25.423 [191/766] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.423 [192/766] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.423 [193/766] Linking target lib/librte_eal.so.25.0 00:02:25.423 [194/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:25.682 [195/766] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:02:25.682 [196/766] Linking target lib/librte_ring.so.25.0 00:02:25.682 [197/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:25.682 [198/766] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:25.682 [199/766] Linking target lib/librte_meter.so.25.0 00:02:25.682 [200/766] Linking target lib/librte_pci.so.25.0 00:02:25.682 [201/766] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:02:25.682 [202/766] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:25.682 [203/766] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:25.682 [204/766] Linking target lib/librte_rcu.so.25.0 00:02:25.682 [205/766] Linking target lib/librte_mempool.so.25.0 00:02:25.682 [206/766] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:02:25.682 [207/766] Linking static target lib/acl/libavx2_tmp.a 00:02:25.682 [208/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:25.682 [209/766] Linking static target lib/librte_cfgfile.a 00:02:25.682 [210/766] Linking target lib/librte_timer.so.25.0 00:02:25.682 [211/766] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:02:25.942 [212/766] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:02:25.942 [213/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:25.942 [214/766] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:02:25.942 [215/766] Linking target lib/librte_mbuf.so.25.0 00:02:25.942 [216/766] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:02:25.942 [217/766] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:02:25.942 [218/766] Linking target lib/librte_net.so.25.0 00:02:25.942 [219/766] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.201 [220/766] Linking target lib/librte_bbdev.so.25.0 00:02:26.201 [221/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:26.201 [222/766] Linking static target lib/librte_bpf.a 00:02:26.201 [223/766] Linking 
target lib/librte_cfgfile.so.25.0 00:02:26.201 [224/766] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:02:26.201 [225/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:26.201 [226/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:26.201 [227/766] Linking target lib/librte_cmdline.so.25.0 00:02:26.201 [228/766] Linking target lib/librte_hash.so.25.0 00:02:26.201 [229/766] Linking static target lib/librte_acl.a 00:02:26.201 [230/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:26.201 [231/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:26.201 [232/766] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:02:26.460 [233/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:26.460 [234/766] Linking static target lib/librte_compressdev.a 00:02:26.460 [235/766] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.460 [236/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:26.460 [237/766] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.460 [238/766] Linking target lib/librte_acl.so.25.0 00:02:26.719 [239/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:26.719 [240/766] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:02:26.719 [241/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:26.719 [242/766] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.719 [243/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:26.719 [244/766] Linking target lib/librte_compressdev.so.25.0 00:02:26.979 [245/766] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:26.979 [246/766] Linking static target lib/librte_distributor.a 00:02:26.979 [247/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:26.979 [248/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:26.979 [249/766] Linking static target lib/librte_dmadev.a 00:02:27.240 [250/766] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.240 [251/766] Linking target lib/librte_distributor.so.25.0 00:02:27.240 [252/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:27.499 [253/766] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.499 [254/766] Linking target lib/librte_dmadev.so.25.0 00:02:27.499 [255/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:27.499 [256/766] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:02:27.499 [257/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:27.760 [258/766] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:27.760 [259/766] Linking static target lib/librte_efd.a 00:02:27.760 [260/766] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.760 [261/766] Linking target lib/librte_efd.so.25.0 00:02:28.020 [262/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:28.020 [263/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:28.020 [264/766] Linking static target lib/librte_cryptodev.a 00:02:28.020 [265/766] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:28.020 [266/766] Linking static target lib/librte_dispatcher.a 00:02:28.279 [267/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:28.279 
[268/766] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:28.279 [269/766] Linking static target lib/librte_gpudev.a 00:02:28.538 [270/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:28.538 [271/766] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:28.538 [272/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:28.538 [273/766] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.539 [274/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:28.798 [275/766] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.798 [276/766] Linking target lib/librte_gpudev.so.25.0 00:02:28.798 [277/766] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:29.058 [278/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:29.058 [279/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:29.058 [280/766] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:29.058 [281/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:29.058 [282/766] Linking static target lib/librte_gro.a 00:02:29.058 [283/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:29.058 [284/766] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.058 [285/766] Linking static target lib/librte_eventdev.a 00:02:29.058 [286/766] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:29.058 [287/766] Linking target lib/librte_cryptodev.so.25.0 00:02:29.318 [288/766] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.318 [289/766] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:29.318 [290/766] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:02:29.318 [291/766] 
Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:29.318 [292/766] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.318 [293/766] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:29.318 [294/766] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:29.318 [295/766] Linking target lib/librte_ethdev.so.25.0 00:02:29.318 [296/766] Linking static target lib/librte_gso.a 00:02:29.577 [297/766] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:02:29.577 [298/766] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.577 [299/766] Linking target lib/librte_metrics.so.25.0 00:02:29.577 [300/766] Linking target lib/librte_bpf.so.25.0 00:02:29.577 [301/766] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:02:29.577 [302/766] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:02:29.577 [303/766] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:29.577 [304/766] Linking target lib/librte_bitratestats.so.25.0 00:02:29.577 [305/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:29.577 [306/766] Linking static target lib/librte_jobstats.a 00:02:29.577 [307/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:29.577 [308/766] Linking target lib/librte_gro.so.25.0 00:02:29.577 [309/766] Linking target lib/librte_gso.so.25.0 00:02:29.577 [310/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:29.837 [311/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:29.837 [312/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:29.837 [313/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:29.837 [314/766] Linking static target 
lib/librte_ip_frag.a 00:02:29.837 [315/766] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.097 [316/766] Linking target lib/librte_jobstats.so.25.0 00:02:30.097 [317/766] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:30.097 [318/766] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.097 [319/766] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:30.097 [320/766] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:30.097 [321/766] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:30.097 [322/766] Linking static target lib/librte_latencystats.a 00:02:30.097 [323/766] Linking target lib/librte_ip_frag.so.25.0 00:02:30.097 [324/766] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:30.356 [325/766] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:02:30.357 [326/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:30.357 [327/766] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.357 [328/766] Linking target lib/librte_latencystats.so.25.0 00:02:30.357 [329/766] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o 00:02:30.357 [330/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:30.357 [331/766] Linking static target lib/librte_lpm.a 00:02:30.616 [332/766] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o 00:02:30.616 [333/766] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:30.616 [334/766] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.616 [335/766] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.616 [336/766] Linking static target lib/librte_power.a 00:02:30.875 [337/766] Generating lib/lpm.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:30.875 [338/766] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:30.875 [339/766] Linking target lib/librte_lpm.so.25.0 00:02:30.875 [340/766] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:30.875 [341/766] Linking static target lib/librte_pcapng.a 00:02:30.875 [342/766] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:02:30.875 [343/766] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.875 [344/766] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:30.875 [345/766] Linking static target lib/librte_rawdev.a 00:02:30.875 [346/766] Linking target lib/librte_eventdev.so.25.0 00:02:30.875 [347/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:30.875 [348/766] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:30.875 [349/766] Linking static target lib/librte_regexdev.a 00:02:31.135 [350/766] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.135 [351/766] Linking target lib/librte_pcapng.so.25.0 00:02:31.135 [352/766] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:02:31.135 [353/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:31.135 [354/766] Linking target lib/librte_dispatcher.so.25.0 00:02:31.135 [355/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:31.135 [356/766] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:02:31.394 [357/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:31.394 [358/766] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.394 [359/766] Linking target lib/librte_rawdev.so.25.0 00:02:31.394 [360/766] Compiling C object 
lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:31.394 [361/766] Linking static target lib/librte_member.a 00:02:31.394 [362/766] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.395 [363/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:31.395 [364/766] Linking static target lib/librte_mldev.a 00:02:31.395 [365/766] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:31.395 [366/766] Linking target lib/librte_power.so.25.0 00:02:31.655 [367/766] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.655 [368/766] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:02:31.655 [369/766] Linking target lib/librte_regexdev.so.25.0 00:02:31.655 [370/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:31.655 [371/766] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:31.655 [372/766] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:31.655 [373/766] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.655 [374/766] Linking static target lib/librte_reorder.a 00:02:31.655 [375/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:31.655 [376/766] Linking target lib/librte_member.so.25.0 00:02:31.655 [377/766] Linking static target lib/librte_rib.a 00:02:31.924 [378/766] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:31.924 [379/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:31.924 [380/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:31.924 [381/766] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.924 [382/766] Linking target lib/librte_reorder.so.25.0 00:02:31.924 [383/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:32.202 [384/766] Linking static 
target lib/librte_stack.a 00:02:32.202 [385/766] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:02:32.202 [386/766] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:32.202 [387/766] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:32.202 [388/766] Linking static target lib/librte_security.a 00:02:32.202 [389/766] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.202 [390/766] Linking target lib/librte_rib.so.25.0 00:02:32.202 [391/766] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.202 [392/766] Linking target lib/librte_stack.so.25.0 00:02:32.202 [393/766] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:32.461 [394/766] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:02:32.461 [395/766] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:32.461 [396/766] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.461 [397/766] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:32.461 [398/766] Linking static target lib/librte_sched.a 00:02:32.461 [399/766] Linking target lib/librte_security.so.25.0 00:02:32.720 [400/766] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:02:32.720 [401/766] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.720 [402/766] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:32.720 [403/766] Linking target lib/librte_mldev.so.25.0 00:02:32.720 [404/766] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.720 [405/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.980 [406/766] Linking target lib/librte_sched.so.25.0 00:02:32.980 [407/766] Generating symbol file 
lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:02:32.980 [408/766] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:33.239 [409/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:33.239 [410/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:33.239 [411/766] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:33.499 [412/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:33.499 [413/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:33.499 [414/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:33.499 [415/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:33.758 [416/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:33.758 [417/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:33.758 [418/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:34.018 [419/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:34.018 [420/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:34.018 [421/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:34.018 [422/766] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:34.278 [423/766] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:34.278 [424/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:34.278 [425/766] Linking static target lib/librte_ipsec.a 00:02:34.536 [426/766] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.536 [427/766] Linking target lib/librte_ipsec.so.25.0 00:02:34.536 [428/766] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:34.796 [429/766] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:02:34.796 [430/766] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:34.796 [431/766] Compiling C object 
lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:34.796 [432/766] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:34.796 [433/766] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:35.056 [434/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:35.056 [435/766] Linking static target lib/librte_pdcp.a 00:02:35.056 [436/766] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:35.056 [437/766] Linking static target lib/librte_fib.a 00:02:35.315 [438/766] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:35.315 [439/766] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.315 [440/766] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:35.315 [441/766] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:35.315 [442/766] Linking target lib/librte_pdcp.so.25.0 00:02:35.315 [443/766] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.315 [444/766] Linking target lib/librte_fib.so.25.0 00:02:35.575 [445/766] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:35.575 [446/766] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:35.834 [447/766] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:35.834 [448/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:35.834 [449/766] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:36.093 [450/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:36.093 [451/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:36.093 [452/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:36.353 [453/766] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:36.353 [454/766] Linking static target lib/librte_port.a 00:02:36.353 [455/766] Compiling 
C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:36.353 [456/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:36.353 [457/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:36.353 [458/766] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:36.353 [459/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:36.353 [460/766] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:36.612 [461/766] Linking static target lib/librte_pdump.a 00:02:36.612 [462/766] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:36.612 [463/766] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.872 [464/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:36.872 [465/766] Linking target lib/librte_pdump.so.25.0 00:02:36.872 [466/766] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.872 [467/766] Linking target lib/librte_port.so.25.0 00:02:36.872 [468/766] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:02:37.132 [469/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:37.132 [470/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:37.132 [471/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:37.132 [472/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:37.132 [473/766] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:37.391 [474/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:37.391 [475/766] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:37.391 [476/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:37.391 [477/766] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:37.391 [478/766] Linking static target lib/librte_table.a 00:02:37.676 [479/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:37.676 [480/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.936 [481/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:37.936 [482/766] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.936 [483/766] Linking target lib/librte_table.so.25.0 00:02:37.936 [484/766] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:38.195 [485/766] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:02:38.195 [486/766] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:38.195 [487/766] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:38.195 [488/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:38.454 [489/766] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:38.454 [490/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:38.713 [491/766] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:38.713 [492/766] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:38.713 [493/766] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:38.713 [494/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:38.973 [495/766] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:38.973 [496/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:38.973 [497/766] Linking static target lib/librte_graph.a 00:02:38.973 [498/766] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:38.973 [499/766] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:39.233 [500/766] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:39.233 [501/766] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:39.493 [502/766] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:39.493 [503/766] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:39.493 [504/766] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.493 [505/766] Linking target lib/librte_graph.so.25.0 00:02:39.493 [506/766] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:02:39.753 [507/766] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:39.753 [508/766] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:39.753 [509/766] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:39.753 [510/766] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:40.012 [511/766] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:40.012 [512/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:40.012 [513/766] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:40.012 [514/766] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:40.012 [515/766] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:40.272 [516/766] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:40.272 [517/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:40.272 [518/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:40.272 [519/766] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:40.272 [520/766] Linking static target lib/librte_node.a 00:02:40.272 [521/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:40.530 [522/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:40.530 [523/766] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:40.530 [524/766] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.790 [525/766] Linking target lib/librte_node.so.25.0 00:02:40.790 [526/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:40.790 [527/766] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:40.790 [528/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:40.790 [529/766] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:40.790 [530/766] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:41.050 [531/766] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.050 [532/766] Linking static target drivers/librte_bus_vdev.a 00:02:41.050 [533/766] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.050 [534/766] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.050 [535/766] Linking static target drivers/librte_bus_pci.a 00:02:41.050 [536/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:41.050 [537/766] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.050 [538/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:41.050 [539/766] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.050 [540/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:41.050 [541/766] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.311 [542/766] Linking target drivers/librte_bus_vdev.so.25.0 00:02:41.311 [543/766] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.311 [544/766] Linking static target drivers/libtmp_rte_mempool_ring.a 
00:02:41.311 [545/766] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:02:41.311 [546/766] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:41.311 [547/766] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.311 [548/766] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.311 [549/766] Linking static target drivers/librte_mempool_ring.a 00:02:41.571 [550/766] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.571 [551/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:41.571 [552/766] Linking target drivers/librte_bus_pci.so.25.0 00:02:41.571 [553/766] Linking target drivers/librte_mempool_ring.so.25.0 00:02:41.571 [554/766] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:02:41.835 [555/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:41.835 [556/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:42.095 [557/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:42.095 [558/766] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:42.664 [559/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:42.924 [560/766] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:42.924 [561/766] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:42.924 [562/766] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:42.924 [563/766] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:42.924 [564/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:43.184 [565/766] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:43.444 [566/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:43.444 [567/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:43.444 [568/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:43.444 [569/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:43.704 [570/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:43.965 [571/766] Compiling C object drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:02:43.965 [572/766] Linking static target drivers/libtmp_rte_power_acpi.a 00:02:43.965 [573/766] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:02:43.965 [574/766] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:02:43.965 [575/766] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:02:43.965 [576/766] Linking static target drivers/libtmp_rte_power_cppc.a 00:02:43.965 [577/766] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:02:44.226 [578/766] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:02:44.226 [579/766] Linking static target drivers/librte_power_acpi.a 00:02:44.226 [580/766] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:02:44.226 [581/766] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:02:44.226 [582/766] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:02:44.226 [583/766] Linking static target drivers/librte_power_amd_pstate.a 00:02:44.226 [584/766] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:02:44.226 [585/766] Compiling C object 
drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:02:44.226 [586/766] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:02:44.226 [587/766] Linking target drivers/librte_power_acpi.so.25.0 00:02:44.226 [588/766] Linking target drivers/librte_power_amd_pstate.so.25.0 00:02:44.226 [589/766] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:02:44.226 [590/766] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:02:44.226 [591/766] Linking static target drivers/librte_power_cppc.a 00:02:44.226 [592/766] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:02:44.226 [593/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:02:44.226 [594/766] Linking target drivers/librte_power_cppc.so.25.0 00:02:44.226 [595/766] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:02:44.226 [596/766] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:02:44.226 [597/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:02:44.485 [598/766] Linking static target drivers/librte_power_intel_pstate.a 00:02:44.485 [599/766] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:02:44.485 [600/766] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:02:44.485 [601/766] Linking target drivers/librte_power_intel_pstate.so.25.0 00:02:44.485 [602/766] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:44.485 [603/766] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:02:44.485 [604/766] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:02:44.485 [605/766] Linking static target drivers/librte_power_kvm_vm.a 
00:02:44.744 [606/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:44.744 [607/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:44.744 [608/766] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:02:44.744 [609/766] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:02:44.744 [610/766] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:02:44.744 [611/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:44.744 [612/766] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.744 [613/766] Linking target drivers/librte_power_kvm_vm.so.25.0 00:02:44.744 [614/766] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:02:44.744 [615/766] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:02:44.744 [616/766] Linking static target drivers/librte_power_intel_uncore.a 00:02:44.744 [617/766] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:02:44.744 [618/766] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:44.744 [619/766] Linking target drivers/librte_power_intel_uncore.so.25.0 00:02:45.004 [620/766] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:45.004 [621/766] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:45.004 [622/766] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:45.264 [623/766] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:45.264 [624/766] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:45.264 [625/766] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:45.264 [626/766] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:45.264 [627/766] Compiling C 
object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:45.523 [628/766] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:45.523 [629/766] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:45.523 [630/766] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:45.523 [631/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:45.523 [632/766] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:45.523 [633/766] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:45.782 [634/766] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:45.782 [635/766] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:45.782 [636/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:45.782 [637/766] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:45.782 [638/766] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:46.042 [639/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:46.042 [640/766] Linking static target drivers/librte_net_i40e.a 00:02:46.042 [641/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:46.042 [642/766] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:46.302 [643/766] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:46.302 [644/766] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:46.302 [645/766] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.562 [646/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:46.562 [647/766] Linking target drivers/librte_net_i40e.so.25.0 00:02:46.562 [648/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:46.562 [649/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:46.822 [650/766] 
Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:46.822 [651/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:46.822 [652/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:47.081 [653/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.081 [654/766] Linking static target lib/librte_vhost.a 00:02:47.081 [655/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:47.081 [656/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:47.081 [657/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:47.341 [658/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:47.341 [659/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:47.600 [660/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:47.600 [661/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:47.600 [662/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:47.600 [663/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:47.858 [664/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:47.858 [665/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:47.858 [666/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:47.858 [667/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:47.858 [668/766] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:47.858 [669/766] Linking target lib/librte_vhost.so.25.0 00:02:47.858 [670/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:48.117 [671/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:48.117 [672/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:48.117 [673/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:48.376 [674/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:48.376 [675/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:48.376 [676/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:48.636 [677/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:48.896 [678/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:49.156 [679/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:49.156 [680/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:49.156 [681/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:49.156 [682/766] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:49.416 [683/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:49.416 [684/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:49.416 [685/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:49.416 [686/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:49.676 [687/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:49.676 [688/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:49.676 [689/766] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 
00:02:49.676 [690/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:49.676 [691/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:49.936 [692/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:49.936 [693/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:49.936 [694/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:50.197 [695/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:50.197 [696/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:50.197 [697/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:50.197 [698/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:50.197 [699/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:50.456 [700/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:50.456 [701/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:50.456 [702/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:50.456 [703/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:50.715 [704/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:50.715 [705/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:50.974 [706/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:50.974 [707/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:50.974 [708/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:50.974 [709/766] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:51.234 [710/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:51.234 [711/766] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:51.234 [712/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:51.234 [713/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:51.495 [714/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:51.495 [715/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:51.794 [716/766] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:51.794 [717/766] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:02:51.794 [718/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:52.079 [719/766] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:52.079 [720/766] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:52.079 [721/766] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:52.338 [722/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:52.338 [723/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:52.597 [724/766] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:52.597 [725/766] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:52.597 [726/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:52.597 [727/766] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:52.857 [728/766] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:52.857 [729/766] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:53.117 [730/766] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:53.117 [731/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:53.117 [732/766] Linking static target lib/librte_pipeline.a 00:02:53.377 [733/766] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:53.377 [734/766] Compiling C object 
app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:53.636 [735/766] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:53.637 [736/766] Linking target app/dpdk-dumpcap 00:02:53.637 [737/766] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:53.895 [738/766] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:53.895 [739/766] Linking target app/dpdk-graph 00:02:53.895 [740/766] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:53.895 [741/766] Linking target app/dpdk-pdump 00:02:53.895 [742/766] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:53.895 [743/766] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:53.895 [744/766] Linking target app/dpdk-proc-info 00:02:54.155 [745/766] Linking target app/dpdk-test-acl 00:02:54.155 [746/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:54.155 [747/766] Linking target app/dpdk-test-cmdline 00:02:54.155 [748/766] Linking target app/dpdk-test-bbdev 00:02:54.156 [749/766] Linking target app/dpdk-test-compress-perf 00:02:54.415 [750/766] Linking target app/dpdk-test-crypto-perf 00:02:54.415 [751/766] Linking target app/dpdk-test-dma-perf 00:02:54.415 [752/766] Linking target app/dpdk-test-eventdev 00:02:54.415 [753/766] Linking target app/dpdk-test-fib 00:02:54.415 [754/766] Linking target app/dpdk-test-flow-perf 00:02:54.415 [755/766] Linking target app/dpdk-test-gpudev 00:02:54.676 [756/766] Linking target app/dpdk-test-mldev 00:02:54.676 [757/766] Linking target app/dpdk-test-pipeline 00:02:54.676 [758/766] Linking target app/dpdk-test-sad 00:02:54.676 [759/766] Linking target app/dpdk-test-regex 00:02:54.676 [760/766] Linking target app/dpdk-testpmd 00:02:54.935 [761/766] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:54.936 [762/766] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:55.504 [763/766] Compiling C 
object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:55.764 [764/766] Linking target app/dpdk-test-security-perf 00:02:57.676 [765/766] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.676 [766/766] Linking target lib/librte_pipeline.so.25.0 00:02:57.936 15:18:56 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:57.936 15:18:56 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:57.936 15:18:56 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:57.936 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:57.936 [0/1] Installing files. 00:02:58.201 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:58.201 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:58.201 
Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:58.201 Installing 
/home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.201 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.202 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.202 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:58.203 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.203 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.204 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.204 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.204 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.204 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.205 
Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.205 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.205 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:58.206 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:58.206 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.206 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:58.782 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:02:58.782 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.782 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.783 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:58.784 Installing
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 
Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.784 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 
Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.785 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:58.785 Installing 
/home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:58.786 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:58.786 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:58.786 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:02:58.786 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:58.786 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:02:58.786 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:58.786 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:02:58.786 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:02:58.786 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:02:58.786 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:58.786 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:02:58.786 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:58.786 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:02:58.786 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:58.786 Installing symlink pointing to librte_rcu.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:02:58.786 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:58.786 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:02:58.786 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:58.786 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:02:58.786 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:58.786 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:02:58.786 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:58.786 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:02:58.786 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:58.786 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:02:58.786 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:58.786 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:02:58.786 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:58.786 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:02:58.786 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:58.786 Installing symlink pointing to librte_metrics.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:02:58.786 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:58.786 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:02:58.786 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:58.786 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:02:58.786 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:58.786 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:02:58.786 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:58.786 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:02:58.786 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:58.786 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:02:58.786 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:58.786 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:02:58.786 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:58.786 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:02:58.786 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:58.786 Installing symlink pointing to 
librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:02:58.786 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:58.786 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:02:58.786 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:58.786 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:02:58.786 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:58.786 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:02:58.786 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:58.786 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:02:58.786 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:58.786 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:02:58.786 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:58.786 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:02:58.786 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:58.786 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:02:58.786 Installing symlink pointing to librte_gpudev.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:58.786 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:02:58.786 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:58.786 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:02:58.786 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:58.786 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:02:58.786 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:58.786 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:02:58.786 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:58.786 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:02:58.786 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:58.786 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:02:58.786 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:58.786 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:02:58.786 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:58.786 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:02:58.786 Installing symlink pointing to 
librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:58.786 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:02:58.786 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:58.786 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:02:58.786 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:58.786 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:02:58.786 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:58.786 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:02:58.786 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:58.786 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:02:58.786 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:58.786 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:02:58.787 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:58.787 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:02:58.787 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:58.787 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:02:58.787 Installing symlink pointing to 
librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:58.787 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:02:58.787 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:58.787 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:02:58.787 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:58.787 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:02:58.787 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:58.787 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:02:58.787 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:58.787 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:02:58.787 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:58.787 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:02:58.787 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:58.787 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:02:58.787 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:58.787 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:02:58.787 Installing symlink pointing to librte_table.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:58.787 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:02:58.787 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:58.787 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:02:58.787 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:58.787 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:02:58.787 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:58.787 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:02:58.787 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:02:58.787 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:02:58.787 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:02:58.787 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:02:58.787 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:02:58.787 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:02:58.787 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:02:58.787 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:02:58.787 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:02:58.787 './librte_bus_vdev.so.25.0' -> 
'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:02:58.787 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:02:58.787 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:02:58.787 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:02:58.787 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:02:58.787 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:02:58.787 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:02:58.787 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:02:58.787 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:02:58.787 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:02:58.787 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:02:58.787 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:02:58.787 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:02:58.787 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:02:58.787 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:02:58.787 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:02:58.787 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:02:58.787 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:02:58.787 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:02:58.787 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:02:58.787 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:02:58.787 './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:02:58.787 './librte_power_kvm_vm.so' -> 
'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:02:58.787 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:02:58.787 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:02:58.787 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:02:58.787 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:02:58.787 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:02:58.787 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:02:58.787 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:02:58.787 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:02:58.787 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:02:58.787 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:02:58.787 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:02:58.787 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:02:58.787 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:02:58.787 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:02:58.787 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:02:58.787 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:02:58.787 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:02:58.787 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:02:58.787 15:18:57 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:58.787 ************************************ 00:02:58.787 END TEST build_native_dpdk 00:02:58.787 ************************************ 00:02:58.787 15:18:57 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:58.787 00:02:58.787 real 0m49.378s 00:02:58.787 user 5m19.870s 00:02:58.787 sys 0m58.682s 00:02:58.787 15:18:57 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:58.787 15:18:57 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:58.787 15:18:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:58.787 15:18:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:58.787 15:18:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:58.787 15:18:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:58.787 15:18:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:58.787 15:18:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:58.787 15:18:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:58.787 15:18:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:59.048 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:59.048 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.048 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:59.048 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:59.307 Using 'verbs' RDMA provider 00:03:15.127 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:33.214 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:33.214 Creating mk/config.mk...done. 00:03:33.214 Creating mk/cc.flags.mk...done. 00:03:33.214 Type 'make' to build. 00:03:33.214 15:19:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:33.214 15:19:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:33.214 15:19:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:33.214 15:19:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.214 ************************************ 00:03:33.214 START TEST make 00:03:33.214 ************************************ 00:03:33.214 15:19:29 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:33.214 make[1]: Nothing to be done for 'all'. 
00:04:19.902 CC lib/log/log.o 00:04:19.902 CC lib/log/log_deprecated.o 00:04:19.902 CC lib/log/log_flags.o 00:04:19.902 CC lib/ut_mock/mock.o 00:04:19.902 CC lib/ut/ut.o 00:04:19.902 LIB libspdk_ut_mock.a 00:04:19.902 LIB libspdk_log.a 00:04:19.902 LIB libspdk_ut.a 00:04:19.902 SO libspdk_ut_mock.so.6.0 00:04:19.902 SO libspdk_ut.so.2.0 00:04:19.902 SO libspdk_log.so.7.1 00:04:19.902 SYMLINK libspdk_ut_mock.so 00:04:19.902 SYMLINK libspdk_ut.so 00:04:19.902 SYMLINK libspdk_log.so 00:04:19.902 CC lib/util/base64.o 00:04:19.902 CC lib/util/cpuset.o 00:04:19.902 CC lib/util/crc16.o 00:04:19.902 CC lib/util/crc32c.o 00:04:19.902 CC lib/util/bit_array.o 00:04:19.902 CC lib/dma/dma.o 00:04:19.902 CC lib/util/crc32.o 00:04:19.902 CC lib/ioat/ioat.o 00:04:19.902 CXX lib/trace_parser/trace.o 00:04:19.902 CC lib/vfio_user/host/vfio_user_pci.o 00:04:19.902 CC lib/util/crc32_ieee.o 00:04:19.902 CC lib/vfio_user/host/vfio_user.o 00:04:19.902 CC lib/util/crc64.o 00:04:19.902 CC lib/util/dif.o 00:04:19.902 LIB libspdk_dma.a 00:04:19.902 CC lib/util/fd.o 00:04:19.902 SO libspdk_dma.so.5.0 00:04:19.902 CC lib/util/fd_group.o 00:04:19.902 CC lib/util/file.o 00:04:19.902 CC lib/util/hexlify.o 00:04:19.902 SYMLINK libspdk_dma.so 00:04:19.902 CC lib/util/iov.o 00:04:19.902 LIB libspdk_ioat.a 00:04:19.902 SO libspdk_ioat.so.7.0 00:04:19.902 CC lib/util/math.o 00:04:19.902 CC lib/util/net.o 00:04:19.902 LIB libspdk_vfio_user.a 00:04:19.902 SYMLINK libspdk_ioat.so 00:04:19.902 CC lib/util/pipe.o 00:04:19.902 SO libspdk_vfio_user.so.5.0 00:04:19.902 CC lib/util/strerror_tls.o 00:04:19.902 CC lib/util/string.o 00:04:19.902 SYMLINK libspdk_vfio_user.so 00:04:19.902 CC lib/util/uuid.o 00:04:19.902 CC lib/util/xor.o 00:04:19.902 CC lib/util/zipf.o 00:04:19.902 CC lib/util/md5.o 00:04:19.902 LIB libspdk_util.a 00:04:19.902 LIB libspdk_trace_parser.a 00:04:19.902 SO libspdk_util.so.10.1 00:04:19.902 SO libspdk_trace_parser.so.6.0 00:04:19.902 SYMLINK libspdk_util.so 00:04:19.902 SYMLINK 
libspdk_trace_parser.so 00:04:19.902 CC lib/conf/conf.o 00:04:19.902 CC lib/idxd/idxd.o 00:04:19.902 CC lib/idxd/idxd_user.o 00:04:19.902 CC lib/idxd/idxd_kernel.o 00:04:19.902 CC lib/env_dpdk/env.o 00:04:19.902 CC lib/env_dpdk/memory.o 00:04:19.902 CC lib/env_dpdk/pci.o 00:04:19.902 CC lib/json/json_parse.o 00:04:19.902 CC lib/rdma_utils/rdma_utils.o 00:04:19.902 CC lib/vmd/vmd.o 00:04:19.902 CC lib/vmd/led.o 00:04:19.902 LIB libspdk_conf.a 00:04:19.902 CC lib/json/json_util.o 00:04:19.902 SO libspdk_conf.so.6.0 00:04:19.902 CC lib/env_dpdk/init.o 00:04:19.902 LIB libspdk_rdma_utils.a 00:04:19.902 SYMLINK libspdk_conf.so 00:04:19.902 CC lib/env_dpdk/threads.o 00:04:19.902 CC lib/env_dpdk/pci_ioat.o 00:04:19.902 SO libspdk_rdma_utils.so.1.0 00:04:19.902 SYMLINK libspdk_rdma_utils.so 00:04:19.902 CC lib/env_dpdk/pci_virtio.o 00:04:19.902 CC lib/env_dpdk/pci_vmd.o 00:04:19.902 CC lib/env_dpdk/pci_idxd.o 00:04:19.902 CC lib/json/json_write.o 00:04:19.902 CC lib/rdma_provider/common.o 00:04:19.902 CC lib/env_dpdk/pci_event.o 00:04:19.902 CC lib/env_dpdk/sigbus_handler.o 00:04:19.902 CC lib/env_dpdk/pci_dpdk.o 00:04:19.902 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.902 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.902 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.902 LIB libspdk_idxd.a 00:04:19.902 LIB libspdk_vmd.a 00:04:19.902 SO libspdk_idxd.so.12.1 00:04:19.902 LIB libspdk_json.a 00:04:19.902 SO libspdk_vmd.so.6.0 00:04:19.902 SYMLINK libspdk_idxd.so 00:04:19.902 SO libspdk_json.so.6.0 00:04:19.902 SYMLINK libspdk_vmd.so 00:04:19.902 LIB libspdk_rdma_provider.a 00:04:19.902 SYMLINK libspdk_json.so 00:04:19.902 SO libspdk_rdma_provider.so.7.0 00:04:19.902 SYMLINK libspdk_rdma_provider.so 00:04:19.902 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.902 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.902 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.902 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.902 LIB libspdk_env_dpdk.a 00:04:19.902 LIB libspdk_jsonrpc.a 00:04:19.902 SO 
libspdk_jsonrpc.so.6.0 00:04:19.902 SO libspdk_env_dpdk.so.15.1 00:04:19.902 SYMLINK libspdk_jsonrpc.so 00:04:19.902 SYMLINK libspdk_env_dpdk.so 00:04:19.902 CC lib/rpc/rpc.o 00:04:19.902 LIB libspdk_rpc.a 00:04:19.902 SO libspdk_rpc.so.6.0 00:04:19.902 SYMLINK libspdk_rpc.so 00:04:19.902 CC lib/notify/notify.o 00:04:19.902 CC lib/notify/notify_rpc.o 00:04:19.903 CC lib/keyring/keyring_rpc.o 00:04:19.903 CC lib/keyring/keyring.o 00:04:19.903 CC lib/trace/trace.o 00:04:19.903 CC lib/trace/trace_flags.o 00:04:19.903 CC lib/trace/trace_rpc.o 00:04:19.903 LIB libspdk_notify.a 00:04:19.903 SO libspdk_notify.so.6.0 00:04:19.903 LIB libspdk_keyring.a 00:04:19.903 SYMLINK libspdk_notify.so 00:04:19.903 SO libspdk_keyring.so.2.0 00:04:19.903 LIB libspdk_trace.a 00:04:19.903 SYMLINK libspdk_keyring.so 00:04:19.903 SO libspdk_trace.so.11.0 00:04:19.903 SYMLINK libspdk_trace.so 00:04:19.903 CC lib/thread/thread.o 00:04:19.903 CC lib/thread/iobuf.o 00:04:20.162 CC lib/sock/sock_rpc.o 00:04:20.162 CC lib/sock/sock.o 00:04:20.420 LIB libspdk_sock.a 00:04:20.420 SO libspdk_sock.so.10.0 00:04:20.679 SYMLINK libspdk_sock.so 00:04:20.941 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:20.941 CC lib/nvme/nvme_ctrlr.o 00:04:20.941 CC lib/nvme/nvme_fabric.o 00:04:20.941 CC lib/nvme/nvme_ns_cmd.o 00:04:20.941 CC lib/nvme/nvme_ns.o 00:04:20.941 CC lib/nvme/nvme_pcie.o 00:04:20.941 CC lib/nvme/nvme_pcie_common.o 00:04:20.941 CC lib/nvme/nvme.o 00:04:20.941 CC lib/nvme/nvme_qpair.o 00:04:21.514 LIB libspdk_thread.a 00:04:21.514 CC lib/nvme/nvme_quirks.o 00:04:21.514 SO libspdk_thread.so.11.0 00:04:21.514 CC lib/nvme/nvme_transport.o 00:04:21.774 CC lib/nvme/nvme_discovery.o 00:04:21.774 SYMLINK libspdk_thread.so 00:04:21.774 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:21.774 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:21.774 CC lib/nvme/nvme_tcp.o 00:04:21.774 CC lib/nvme/nvme_opal.o 00:04:21.774 CC lib/nvme/nvme_io_msg.o 00:04:22.034 CC lib/nvme/nvme_poll_group.o 00:04:22.034 CC lib/nvme/nvme_zns.o 00:04:22.294 
CC lib/nvme/nvme_stubs.o 00:04:22.294 CC lib/nvme/nvme_auth.o 00:04:22.294 CC lib/nvme/nvme_cuse.o 00:04:22.294 CC lib/nvme/nvme_rdma.o 00:04:22.554 CC lib/accel/accel.o 00:04:22.555 CC lib/blob/blobstore.o 00:04:22.815 CC lib/init/json_config.o 00:04:22.815 CC lib/virtio/virtio.o 00:04:22.815 CC lib/fsdev/fsdev.o 00:04:23.075 CC lib/init/subsystem.o 00:04:23.075 CC lib/init/subsystem_rpc.o 00:04:23.075 CC lib/virtio/virtio_vhost_user.o 00:04:23.075 CC lib/blob/request.o 00:04:23.335 CC lib/init/rpc.o 00:04:23.335 CC lib/virtio/virtio_vfio_user.o 00:04:23.335 CC lib/virtio/virtio_pci.o 00:04:23.335 LIB libspdk_init.a 00:04:23.335 SO libspdk_init.so.6.0 00:04:23.595 CC lib/blob/zeroes.o 00:04:23.595 CC lib/blob/blob_bs_dev.o 00:04:23.595 SYMLINK libspdk_init.so 00:04:23.595 CC lib/fsdev/fsdev_io.o 00:04:23.595 CC lib/accel/accel_rpc.o 00:04:23.595 CC lib/fsdev/fsdev_rpc.o 00:04:23.595 LIB libspdk_virtio.a 00:04:23.595 CC lib/accel/accel_sw.o 00:04:23.595 CC lib/event/app.o 00:04:23.595 CC lib/event/reactor.o 00:04:23.595 SO libspdk_virtio.so.7.0 00:04:23.854 CC lib/event/log_rpc.o 00:04:23.854 CC lib/event/app_rpc.o 00:04:23.854 SYMLINK libspdk_virtio.so 00:04:23.854 CC lib/event/scheduler_static.o 00:04:23.854 LIB libspdk_nvme.a 00:04:23.854 LIB libspdk_fsdev.a 00:04:24.114 SO libspdk_nvme.so.15.0 00:04:24.114 SO libspdk_fsdev.so.2.0 00:04:24.114 LIB libspdk_accel.a 00:04:24.114 SYMLINK libspdk_fsdev.so 00:04:24.114 SO libspdk_accel.so.16.0 00:04:24.114 SYMLINK libspdk_accel.so 00:04:24.114 LIB libspdk_event.a 00:04:24.375 SYMLINK libspdk_nvme.so 00:04:24.375 SO libspdk_event.so.14.0 00:04:24.375 SYMLINK libspdk_event.so 00:04:24.375 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:24.636 CC lib/bdev/bdev.o 00:04:24.636 CC lib/bdev/bdev_rpc.o 00:04:24.636 CC lib/bdev/bdev_zone.o 00:04:24.636 CC lib/bdev/part.o 00:04:24.636 CC lib/bdev/scsi_nvme.o 00:04:25.207 LIB libspdk_fuse_dispatcher.a 00:04:25.207 SO libspdk_fuse_dispatcher.so.1.0 00:04:25.207 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:26.147 LIB libspdk_blob.a 00:04:26.407 SO libspdk_blob.so.12.0 00:04:26.407 SYMLINK libspdk_blob.so 00:04:26.978 CC lib/lvol/lvol.o 00:04:26.978 CC lib/blobfs/blobfs.o 00:04:26.978 CC lib/blobfs/tree.o 00:04:27.238 LIB libspdk_bdev.a 00:04:27.498 SO libspdk_bdev.so.17.0 00:04:27.498 SYMLINK libspdk_bdev.so 00:04:27.758 LIB libspdk_blobfs.a 00:04:27.758 CC lib/scsi/dev.o 00:04:27.758 CC lib/scsi/port.o 00:04:27.758 CC lib/scsi/scsi.o 00:04:27.758 CC lib/scsi/lun.o 00:04:27.758 CC lib/ublk/ublk.o 00:04:27.758 CC lib/nvmf/ctrlr.o 00:04:27.758 CC lib/ftl/ftl_core.o 00:04:27.758 CC lib/nbd/nbd.o 00:04:27.758 SO libspdk_blobfs.so.11.0 00:04:27.758 SYMLINK libspdk_blobfs.so 00:04:27.758 CC lib/nbd/nbd_rpc.o 00:04:27.758 LIB libspdk_lvol.a 00:04:27.758 SO libspdk_lvol.so.11.0 00:04:27.758 CC lib/ftl/ftl_init.o 00:04:27.758 CC lib/ftl/ftl_layout.o 00:04:28.019 SYMLINK libspdk_lvol.so 00:04:28.019 CC lib/ublk/ublk_rpc.o 00:04:28.019 CC lib/nvmf/ctrlr_discovery.o 00:04:28.019 CC lib/nvmf/ctrlr_bdev.o 00:04:28.019 CC lib/scsi/scsi_bdev.o 00:04:28.019 CC lib/ftl/ftl_debug.o 00:04:28.019 CC lib/ftl/ftl_io.o 00:04:28.019 CC lib/ftl/ftl_sb.o 00:04:28.279 LIB libspdk_nbd.a 00:04:28.279 SO libspdk_nbd.so.7.0 00:04:28.279 CC lib/nvmf/subsystem.o 00:04:28.279 SYMLINK libspdk_nbd.so 00:04:28.279 CC lib/nvmf/nvmf.o 00:04:28.279 CC lib/ftl/ftl_l2p.o 00:04:28.279 CC lib/nvmf/nvmf_rpc.o 00:04:28.279 CC lib/ftl/ftl_l2p_flat.o 00:04:28.539 LIB libspdk_ublk.a 00:04:28.539 CC lib/ftl/ftl_nv_cache.o 00:04:28.539 SO libspdk_ublk.so.3.0 00:04:28.539 CC lib/ftl/ftl_band.o 00:04:28.539 SYMLINK libspdk_ublk.so 00:04:28.539 CC lib/scsi/scsi_pr.o 00:04:28.539 CC lib/ftl/ftl_band_ops.o 00:04:28.539 CC lib/ftl/ftl_writer.o 00:04:28.799 CC lib/nvmf/transport.o 00:04:28.799 CC lib/ftl/ftl_rq.o 00:04:29.059 CC lib/scsi/scsi_rpc.o 00:04:29.059 CC lib/nvmf/tcp.o 00:04:29.059 CC lib/nvmf/stubs.o 00:04:29.059 CC lib/ftl/ftl_reloc.o 00:04:29.059 CC lib/scsi/task.o 00:04:29.320 
CC lib/nvmf/mdns_server.o 00:04:29.320 LIB libspdk_scsi.a 00:04:29.320 CC lib/nvmf/rdma.o 00:04:29.320 SO libspdk_scsi.so.9.0 00:04:29.580 SYMLINK libspdk_scsi.so 00:04:29.580 CC lib/ftl/ftl_l2p_cache.o 00:04:29.580 CC lib/ftl/ftl_p2l.o 00:04:29.580 CC lib/nvmf/auth.o 00:04:29.580 CC lib/ftl/ftl_p2l_log.o 00:04:29.580 CC lib/ftl/mngt/ftl_mngt.o 00:04:29.840 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:29.840 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:29.840 CC lib/iscsi/conn.o 00:04:29.840 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:29.840 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:30.133 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:30.133 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:30.133 CC lib/vhost/vhost.o 00:04:30.133 CC lib/vhost/vhost_rpc.o 00:04:30.133 CC lib/vhost/vhost_scsi.o 00:04:30.133 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:30.418 CC lib/vhost/vhost_blk.o 00:04:30.418 CC lib/vhost/rte_vhost_user.o 00:04:30.418 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:30.418 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:30.677 CC lib/iscsi/init_grp.o 00:04:30.677 CC lib/iscsi/iscsi.o 00:04:30.677 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:30.677 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:30.937 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:30.937 CC lib/iscsi/param.o 00:04:30.937 CC lib/ftl/utils/ftl_conf.o 00:04:30.937 CC lib/ftl/utils/ftl_md.o 00:04:30.937 CC lib/ftl/utils/ftl_mempool.o 00:04:30.937 CC lib/ftl/utils/ftl_bitmap.o 00:04:31.194 CC lib/ftl/utils/ftl_property.o 00:04:31.194 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:31.194 CC lib/iscsi/portal_grp.o 00:04:31.194 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:31.194 CC lib/iscsi/tgt_node.o 00:04:31.194 CC lib/iscsi/iscsi_subsystem.o 00:04:31.452 CC lib/iscsi/iscsi_rpc.o 00:04:31.452 LIB libspdk_vhost.a 00:04:31.452 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:31.452 SO libspdk_vhost.so.8.0 00:04:31.452 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:31.452 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:31.452 CC lib/iscsi/task.o 00:04:31.452 SYMLINK 
libspdk_vhost.so 00:04:31.452 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:31.710 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:31.710 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:31.710 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:31.710 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:31.710 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:31.710 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:31.710 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:31.710 CC lib/ftl/base/ftl_base_dev.o 00:04:31.710 CC lib/ftl/base/ftl_base_bdev.o 00:04:31.969 CC lib/ftl/ftl_trace.o 00:04:31.969 LIB libspdk_nvmf.a 00:04:31.969 SO libspdk_nvmf.so.20.0 00:04:32.227 LIB libspdk_ftl.a 00:04:32.227 SYMLINK libspdk_nvmf.so 00:04:32.227 LIB libspdk_iscsi.a 00:04:32.486 SO libspdk_iscsi.so.8.0 00:04:32.486 SO libspdk_ftl.so.9.0 00:04:32.486 SYMLINK libspdk_iscsi.so 00:04:32.486 SYMLINK libspdk_ftl.so 00:04:33.053 CC module/env_dpdk/env_dpdk_rpc.o 00:04:33.053 CC module/fsdev/aio/fsdev_aio.o 00:04:33.053 CC module/accel/ioat/accel_ioat.o 00:04:33.053 CC module/accel/error/accel_error.o 00:04:33.053 CC module/sock/posix/posix.o 00:04:33.053 CC module/accel/dsa/accel_dsa.o 00:04:33.053 CC module/accel/iaa/accel_iaa.o 00:04:33.053 CC module/blob/bdev/blob_bdev.o 00:04:33.053 CC module/keyring/file/keyring.o 00:04:33.053 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:33.053 LIB libspdk_env_dpdk_rpc.a 00:04:33.053 SO libspdk_env_dpdk_rpc.so.6.0 00:04:33.311 SYMLINK libspdk_env_dpdk_rpc.so 00:04:33.311 CC module/keyring/file/keyring_rpc.o 00:04:33.311 CC module/accel/ioat/accel_ioat_rpc.o 00:04:33.311 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:33.311 CC module/accel/iaa/accel_iaa_rpc.o 00:04:33.311 CC module/accel/error/accel_error_rpc.o 00:04:33.311 LIB libspdk_scheduler_dynamic.a 00:04:33.311 SO libspdk_scheduler_dynamic.so.4.0 00:04:33.311 LIB libspdk_keyring_file.a 00:04:33.311 LIB libspdk_accel_ioat.a 00:04:33.311 LIB libspdk_blob_bdev.a 00:04:33.311 SYMLINK libspdk_scheduler_dynamic.so 00:04:33.311 CC module/accel/dsa/accel_dsa_rpc.o 
00:04:33.311 SO libspdk_keyring_file.so.2.0 00:04:33.311 SO libspdk_accel_ioat.so.6.0 00:04:33.311 SO libspdk_blob_bdev.so.12.0 00:04:33.311 LIB libspdk_accel_iaa.a 00:04:33.311 CC module/fsdev/aio/linux_aio_mgr.o 00:04:33.311 LIB libspdk_accel_error.a 00:04:33.311 SYMLINK libspdk_keyring_file.so 00:04:33.311 SO libspdk_accel_iaa.so.3.0 00:04:33.311 SYMLINK libspdk_accel_ioat.so 00:04:33.311 SYMLINK libspdk_blob_bdev.so 00:04:33.569 SO libspdk_accel_error.so.2.0 00:04:33.569 SYMLINK libspdk_accel_iaa.so 00:04:33.569 LIB libspdk_accel_dsa.a 00:04:33.569 SYMLINK libspdk_accel_error.so 00:04:33.569 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:33.569 SO libspdk_accel_dsa.so.5.0 00:04:33.569 CC module/keyring/linux/keyring.o 00:04:33.569 SYMLINK libspdk_accel_dsa.so 00:04:33.569 CC module/keyring/linux/keyring_rpc.o 00:04:33.569 CC module/scheduler/gscheduler/gscheduler.o 00:04:33.569 CC module/bdev/delay/vbdev_delay.o 00:04:33.569 CC module/bdev/error/vbdev_error.o 00:04:33.569 LIB libspdk_scheduler_dpdk_governor.a 00:04:33.569 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:33.828 CC module/blobfs/bdev/blobfs_bdev.o 00:04:33.828 LIB libspdk_keyring_linux.a 00:04:33.828 SO libspdk_keyring_linux.so.1.0 00:04:33.828 LIB libspdk_scheduler_gscheduler.a 00:04:33.828 CC module/bdev/gpt/gpt.o 00:04:33.828 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:33.828 CC module/bdev/error/vbdev_error_rpc.o 00:04:33.828 SO libspdk_scheduler_gscheduler.so.4.0 00:04:33.828 LIB libspdk_fsdev_aio.a 00:04:33.828 SYMLINK libspdk_keyring_linux.so 00:04:33.828 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:33.828 SYMLINK libspdk_scheduler_gscheduler.so 00:04:33.828 CC module/bdev/lvol/vbdev_lvol.o 00:04:33.828 SO libspdk_fsdev_aio.so.1.0 00:04:33.828 LIB libspdk_sock_posix.a 00:04:33.828 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:33.828 SO libspdk_sock_posix.so.6.0 00:04:33.828 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:33.828 SYMLINK libspdk_fsdev_aio.so 00:04:33.828 CC 
module/bdev/gpt/vbdev_gpt.o 00:04:33.828 LIB libspdk_bdev_error.a 00:04:34.086 SO libspdk_bdev_error.so.6.0 00:04:34.086 CC module/bdev/malloc/bdev_malloc.o 00:04:34.086 SYMLINK libspdk_sock_posix.so 00:04:34.086 SYMLINK libspdk_bdev_error.so 00:04:34.086 LIB libspdk_bdev_delay.a 00:04:34.086 LIB libspdk_blobfs_bdev.a 00:04:34.086 CC module/bdev/null/bdev_null.o 00:04:34.086 SO libspdk_bdev_delay.so.6.0 00:04:34.086 SO libspdk_blobfs_bdev.so.6.0 00:04:34.086 CC module/bdev/nvme/bdev_nvme.o 00:04:34.086 CC module/bdev/passthru/vbdev_passthru.o 00:04:34.086 SYMLINK libspdk_blobfs_bdev.so 00:04:34.086 SYMLINK libspdk_bdev_delay.so 00:04:34.345 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:34.345 CC module/bdev/raid/bdev_raid.o 00:04:34.345 LIB libspdk_bdev_gpt.a 00:04:34.345 SO libspdk_bdev_gpt.so.6.0 00:04:34.345 SYMLINK libspdk_bdev_gpt.so 00:04:34.345 CC module/bdev/null/bdev_null_rpc.o 00:04:34.345 CC module/bdev/raid/bdev_raid_rpc.o 00:04:34.345 CC module/bdev/raid/bdev_raid_sb.o 00:04:34.345 LIB libspdk_bdev_malloc.a 00:04:34.345 LIB libspdk_bdev_lvol.a 00:04:34.345 CC module/bdev/split/vbdev_split.o 00:04:34.345 SO libspdk_bdev_malloc.so.6.0 00:04:34.345 SO libspdk_bdev_lvol.so.6.0 00:04:34.604 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:34.604 SYMLINK libspdk_bdev_malloc.so 00:04:34.604 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:34.604 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:34.604 SYMLINK libspdk_bdev_lvol.so 00:04:34.604 LIB libspdk_bdev_null.a 00:04:34.604 SO libspdk_bdev_null.so.6.0 00:04:34.604 CC module/bdev/split/vbdev_split_rpc.o 00:04:34.604 SYMLINK libspdk_bdev_null.so 00:04:34.604 CC module/bdev/raid/raid0.o 00:04:34.604 LIB libspdk_bdev_passthru.a 00:04:34.604 CC module/bdev/aio/bdev_aio.o 00:04:34.863 SO libspdk_bdev_passthru.so.6.0 00:04:34.863 CC module/bdev/ftl/bdev_ftl.o 00:04:34.863 SYMLINK libspdk_bdev_passthru.so 00:04:34.863 CC module/bdev/aio/bdev_aio_rpc.o 00:04:34.864 CC module/bdev/iscsi/bdev_iscsi.o 
00:04:34.864 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:34.864 LIB libspdk_bdev_split.a 00:04:34.864 LIB libspdk_bdev_zone_block.a 00:04:34.864 SO libspdk_bdev_split.so.6.0 00:04:34.864 SO libspdk_bdev_zone_block.so.6.0 00:04:34.864 SYMLINK libspdk_bdev_split.so 00:04:34.864 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:34.864 SYMLINK libspdk_bdev_zone_block.so 00:04:34.864 CC module/bdev/raid/raid1.o 00:04:34.864 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:34.864 CC module/bdev/raid/concat.o 00:04:35.122 LIB libspdk_bdev_aio.a 00:04:35.122 CC module/bdev/raid/raid5f.o 00:04:35.123 SO libspdk_bdev_aio.so.6.0 00:04:35.123 LIB libspdk_bdev_ftl.a 00:04:35.123 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:35.123 SO libspdk_bdev_ftl.so.6.0 00:04:35.123 SYMLINK libspdk_bdev_aio.so 00:04:35.381 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:35.381 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:35.381 CC module/bdev/nvme/nvme_rpc.o 00:04:35.381 CC module/bdev/nvme/bdev_mdns_client.o 00:04:35.381 SYMLINK libspdk_bdev_ftl.so 00:04:35.381 CC module/bdev/nvme/vbdev_opal.o 00:04:35.381 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:35.381 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:35.381 LIB libspdk_bdev_iscsi.a 00:04:35.381 SO libspdk_bdev_iscsi.so.6.0 00:04:35.381 LIB libspdk_bdev_virtio.a 00:04:35.640 SYMLINK libspdk_bdev_iscsi.so 00:04:35.640 SO libspdk_bdev_virtio.so.6.0 00:04:35.640 SYMLINK libspdk_bdev_virtio.so 00:04:35.640 LIB libspdk_bdev_raid.a 00:04:35.898 SO libspdk_bdev_raid.so.6.0 00:04:35.898 SYMLINK libspdk_bdev_raid.so 00:04:37.276 LIB libspdk_bdev_nvme.a 00:04:37.276 SO libspdk_bdev_nvme.so.7.1 00:04:37.276 SYMLINK libspdk_bdev_nvme.so 00:04:37.843 CC module/event/subsystems/iobuf/iobuf.o 00:04:37.843 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:37.843 CC module/event/subsystems/keyring/keyring.o 00:04:37.843 CC module/event/subsystems/sock/sock.o 00:04:37.843 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:37.843 CC module/event/subsystems/vmd/vmd.o 
00:04:37.843 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:37.843 CC module/event/subsystems/scheduler/scheduler.o 00:04:37.843 CC module/event/subsystems/fsdev/fsdev.o 00:04:37.843 LIB libspdk_event_keyring.a 00:04:37.843 LIB libspdk_event_sock.a 00:04:37.843 LIB libspdk_event_vhost_blk.a 00:04:37.843 LIB libspdk_event_scheduler.a 00:04:37.843 SO libspdk_event_keyring.so.1.0 00:04:37.843 SO libspdk_event_sock.so.5.0 00:04:37.843 LIB libspdk_event_fsdev.a 00:04:37.843 LIB libspdk_event_iobuf.a 00:04:37.843 SO libspdk_event_scheduler.so.4.0 00:04:37.843 LIB libspdk_event_vmd.a 00:04:37.843 SO libspdk_event_fsdev.so.1.0 00:04:37.843 SO libspdk_event_vhost_blk.so.3.0 00:04:38.102 SO libspdk_event_iobuf.so.3.0 00:04:38.102 SO libspdk_event_vmd.so.6.0 00:04:38.102 SYMLINK libspdk_event_keyring.so 00:04:38.102 SYMLINK libspdk_event_sock.so 00:04:38.102 SYMLINK libspdk_event_scheduler.so 00:04:38.102 SYMLINK libspdk_event_fsdev.so 00:04:38.102 SYMLINK libspdk_event_vhost_blk.so 00:04:38.102 SYMLINK libspdk_event_vmd.so 00:04:38.102 SYMLINK libspdk_event_iobuf.so 00:04:38.361 CC module/event/subsystems/accel/accel.o 00:04:38.620 LIB libspdk_event_accel.a 00:04:38.620 SO libspdk_event_accel.so.6.0 00:04:38.620 SYMLINK libspdk_event_accel.so 00:04:39.185 CC module/event/subsystems/bdev/bdev.o 00:04:39.185 LIB libspdk_event_bdev.a 00:04:39.185 SO libspdk_event_bdev.so.6.0 00:04:39.443 SYMLINK libspdk_event_bdev.so 00:04:39.702 CC module/event/subsystems/scsi/scsi.o 00:04:39.702 CC module/event/subsystems/ublk/ublk.o 00:04:39.702 CC module/event/subsystems/nbd/nbd.o 00:04:39.702 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:39.702 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:39.961 LIB libspdk_event_nbd.a 00:04:39.961 LIB libspdk_event_ublk.a 00:04:39.961 LIB libspdk_event_scsi.a 00:04:39.961 SO libspdk_event_nbd.so.6.0 00:04:39.961 SO libspdk_event_ublk.so.3.0 00:04:39.961 SO libspdk_event_scsi.so.6.0 00:04:39.961 SYMLINK libspdk_event_ublk.so 00:04:39.961 
SYMLINK libspdk_event_nbd.so 00:04:39.961 LIB libspdk_event_nvmf.a 00:04:39.961 SYMLINK libspdk_event_scsi.so 00:04:39.961 SO libspdk_event_nvmf.so.6.0 00:04:39.961 SYMLINK libspdk_event_nvmf.so 00:04:40.221 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:40.221 CC module/event/subsystems/iscsi/iscsi.o 00:04:40.479 LIB libspdk_event_vhost_scsi.a 00:04:40.479 LIB libspdk_event_iscsi.a 00:04:40.479 SO libspdk_event_vhost_scsi.so.3.0 00:04:40.479 SO libspdk_event_iscsi.so.6.0 00:04:40.479 SYMLINK libspdk_event_vhost_scsi.so 00:04:40.737 SYMLINK libspdk_event_iscsi.so 00:04:40.737 SO libspdk.so.6.0 00:04:40.737 SYMLINK libspdk.so 00:04:41.303 CXX app/trace/trace.o 00:04:41.303 CC app/trace_record/trace_record.o 00:04:41.303 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:41.303 CC app/iscsi_tgt/iscsi_tgt.o 00:04:41.303 CC app/nvmf_tgt/nvmf_main.o 00:04:41.303 CC examples/ioat/perf/perf.o 00:04:41.303 CC app/spdk_tgt/spdk_tgt.o 00:04:41.303 CC examples/util/zipf/zipf.o 00:04:41.303 CC test/thread/poller_perf/poller_perf.o 00:04:41.303 CC test/dma/test_dma/test_dma.o 00:04:41.303 LINK interrupt_tgt 00:04:41.303 LINK poller_perf 00:04:41.303 LINK zipf 00:04:41.303 LINK nvmf_tgt 00:04:41.303 LINK spdk_trace_record 00:04:41.303 LINK iscsi_tgt 00:04:41.303 LINK spdk_tgt 00:04:41.561 LINK ioat_perf 00:04:41.561 LINK spdk_trace 00:04:41.561 CC examples/ioat/verify/verify.o 00:04:41.561 CC app/spdk_lspci/spdk_lspci.o 00:04:41.561 CC app/spdk_nvme_perf/perf.o 00:04:41.561 CC app/spdk_nvme_identify/identify.o 00:04:41.561 CC app/spdk_nvme_discover/discovery_aer.o 00:04:41.820 CC app/spdk_top/spdk_top.o 00:04:41.820 LINK spdk_lspci 00:04:41.820 CC examples/thread/thread/thread_ex.o 00:04:41.820 CC app/spdk_dd/spdk_dd.o 00:04:41.820 LINK test_dma 00:04:41.820 LINK verify 00:04:41.820 CC examples/sock/hello_world/hello_sock.o 00:04:41.820 LINK spdk_nvme_discover 00:04:42.079 LINK thread 00:04:42.079 CC app/fio/nvme/fio_plugin.o 00:04:42.079 CC app/vhost/vhost.o 00:04:42.079 
LINK hello_sock 00:04:42.079 LINK spdk_dd 00:04:42.338 CC examples/vmd/lsvmd/lsvmd.o 00:04:42.338 CC test/app/bdev_svc/bdev_svc.o 00:04:42.338 LINK vhost 00:04:42.338 LINK lsvmd 00:04:42.338 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:42.338 LINK bdev_svc 00:04:42.597 CC examples/idxd/perf/perf.o 00:04:42.597 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:42.597 LINK spdk_nvme_perf 00:04:42.597 CC examples/vmd/led/led.o 00:04:42.597 CC examples/accel/perf/accel_perf.o 00:04:42.597 LINK spdk_nvme_identify 00:04:42.597 LINK spdk_nvme 00:04:42.597 LINK spdk_top 00:04:42.856 LINK led 00:04:42.856 CC examples/blob/hello_world/hello_blob.o 00:04:42.856 LINK nvme_fuzz 00:04:42.856 LINK idxd_perf 00:04:42.856 LINK hello_fsdev 00:04:42.856 CC test/app/histogram_perf/histogram_perf.o 00:04:42.856 CC app/fio/bdev/fio_plugin.o 00:04:42.856 CC test/app/jsoncat/jsoncat.o 00:04:42.856 CC test/app/stub/stub.o 00:04:43.115 LINK histogram_perf 00:04:43.115 CC examples/blob/cli/blobcli.o 00:04:43.115 LINK hello_blob 00:04:43.115 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:43.115 LINK jsoncat 00:04:43.115 LINK stub 00:04:43.115 CC test/blobfs/mkfs/mkfs.o 00:04:43.115 CC examples/nvme/hello_world/hello_world.o 00:04:43.374 LINK accel_perf 00:04:43.374 CC examples/nvme/reconnect/reconnect.o 00:04:43.374 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:43.374 CC examples/nvme/arbitration/arbitration.o 00:04:43.374 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:43.374 LINK mkfs 00:04:43.374 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:43.374 LINK spdk_bdev 00:04:43.374 LINK hello_world 00:04:43.633 LINK blobcli 00:04:43.633 LINK reconnect 00:04:43.633 CC examples/nvme/hotplug/hotplug.o 00:04:43.633 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:43.633 CC examples/nvme/abort/abort.o 00:04:43.633 LINK arbitration 00:04:43.633 CC examples/bdev/hello_world/hello_bdev.o 00:04:43.893 LINK vhost_fuzz 00:04:43.893 LINK cmb_copy 00:04:43.893 CC examples/bdev/bdevperf/bdevperf.o 
00:04:43.893 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:43.893 LINK hotplug 00:04:43.893 LINK nvme_manage 00:04:43.893 TEST_HEADER include/spdk/accel.h 00:04:43.893 TEST_HEADER include/spdk/accel_module.h 00:04:43.893 TEST_HEADER include/spdk/assert.h 00:04:43.893 TEST_HEADER include/spdk/barrier.h 00:04:43.893 TEST_HEADER include/spdk/base64.h 00:04:43.893 TEST_HEADER include/spdk/bdev.h 00:04:43.893 TEST_HEADER include/spdk/bdev_module.h 00:04:43.893 TEST_HEADER include/spdk/bdev_zone.h 00:04:43.893 TEST_HEADER include/spdk/bit_array.h 00:04:43.893 TEST_HEADER include/spdk/bit_pool.h 00:04:43.893 TEST_HEADER include/spdk/blob_bdev.h 00:04:43.893 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:43.893 LINK hello_bdev 00:04:43.893 TEST_HEADER include/spdk/blobfs.h 00:04:43.893 TEST_HEADER include/spdk/blob.h 00:04:43.893 TEST_HEADER include/spdk/conf.h 00:04:43.893 TEST_HEADER include/spdk/config.h 00:04:43.893 TEST_HEADER include/spdk/cpuset.h 00:04:43.893 TEST_HEADER include/spdk/crc16.h 00:04:43.893 TEST_HEADER include/spdk/crc32.h 00:04:43.893 TEST_HEADER include/spdk/crc64.h 00:04:44.153 TEST_HEADER include/spdk/dif.h 00:04:44.153 TEST_HEADER include/spdk/dma.h 00:04:44.153 TEST_HEADER include/spdk/endian.h 00:04:44.153 TEST_HEADER include/spdk/env_dpdk.h 00:04:44.153 TEST_HEADER include/spdk/env.h 00:04:44.153 TEST_HEADER include/spdk/event.h 00:04:44.153 TEST_HEADER include/spdk/fd_group.h 00:04:44.153 TEST_HEADER include/spdk/fd.h 00:04:44.153 TEST_HEADER include/spdk/file.h 00:04:44.153 TEST_HEADER include/spdk/fsdev.h 00:04:44.153 TEST_HEADER include/spdk/fsdev_module.h 00:04:44.153 TEST_HEADER include/spdk/ftl.h 00:04:44.153 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:44.153 TEST_HEADER include/spdk/gpt_spec.h 00:04:44.153 TEST_HEADER include/spdk/hexlify.h 00:04:44.153 TEST_HEADER include/spdk/histogram_data.h 00:04:44.153 TEST_HEADER include/spdk/idxd.h 00:04:44.153 TEST_HEADER include/spdk/idxd_spec.h 00:04:44.153 TEST_HEADER 
include/spdk/init.h 00:04:44.153 TEST_HEADER include/spdk/ioat.h 00:04:44.153 TEST_HEADER include/spdk/ioat_spec.h 00:04:44.153 TEST_HEADER include/spdk/iscsi_spec.h 00:04:44.153 TEST_HEADER include/spdk/json.h 00:04:44.153 TEST_HEADER include/spdk/jsonrpc.h 00:04:44.153 TEST_HEADER include/spdk/keyring.h 00:04:44.153 TEST_HEADER include/spdk/keyring_module.h 00:04:44.153 TEST_HEADER include/spdk/likely.h 00:04:44.153 TEST_HEADER include/spdk/log.h 00:04:44.153 TEST_HEADER include/spdk/lvol.h 00:04:44.153 TEST_HEADER include/spdk/md5.h 00:04:44.153 TEST_HEADER include/spdk/memory.h 00:04:44.153 TEST_HEADER include/spdk/mmio.h 00:04:44.153 TEST_HEADER include/spdk/nbd.h 00:04:44.153 LINK pmr_persistence 00:04:44.153 TEST_HEADER include/spdk/net.h 00:04:44.153 TEST_HEADER include/spdk/notify.h 00:04:44.153 TEST_HEADER include/spdk/nvme.h 00:04:44.153 TEST_HEADER include/spdk/nvme_intel.h 00:04:44.153 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:44.153 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:44.153 TEST_HEADER include/spdk/nvme_spec.h 00:04:44.153 TEST_HEADER include/spdk/nvme_zns.h 00:04:44.153 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:44.153 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:44.153 TEST_HEADER include/spdk/nvmf.h 00:04:44.153 TEST_HEADER include/spdk/nvmf_spec.h 00:04:44.153 LINK abort 00:04:44.153 TEST_HEADER include/spdk/nvmf_transport.h 00:04:44.153 TEST_HEADER include/spdk/opal.h 00:04:44.153 TEST_HEADER include/spdk/opal_spec.h 00:04:44.153 TEST_HEADER include/spdk/pci_ids.h 00:04:44.153 TEST_HEADER include/spdk/pipe.h 00:04:44.153 TEST_HEADER include/spdk/queue.h 00:04:44.153 TEST_HEADER include/spdk/reduce.h 00:04:44.153 TEST_HEADER include/spdk/rpc.h 00:04:44.153 TEST_HEADER include/spdk/scheduler.h 00:04:44.153 TEST_HEADER include/spdk/scsi.h 00:04:44.153 TEST_HEADER include/spdk/scsi_spec.h 00:04:44.153 TEST_HEADER include/spdk/sock.h 00:04:44.153 TEST_HEADER include/spdk/stdinc.h 00:04:44.153 TEST_HEADER include/spdk/string.h 
00:04:44.153 TEST_HEADER include/spdk/thread.h 00:04:44.153 TEST_HEADER include/spdk/trace.h 00:04:44.153 TEST_HEADER include/spdk/trace_parser.h 00:04:44.153 TEST_HEADER include/spdk/tree.h 00:04:44.153 TEST_HEADER include/spdk/ublk.h 00:04:44.153 TEST_HEADER include/spdk/util.h 00:04:44.153 TEST_HEADER include/spdk/uuid.h 00:04:44.153 TEST_HEADER include/spdk/version.h 00:04:44.153 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:44.153 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:44.153 TEST_HEADER include/spdk/vhost.h 00:04:44.153 TEST_HEADER include/spdk/vmd.h 00:04:44.153 TEST_HEADER include/spdk/xor.h 00:04:44.153 TEST_HEADER include/spdk/zipf.h 00:04:44.153 CXX test/cpp_headers/accel.o 00:04:44.153 CC test/event/event_perf/event_perf.o 00:04:44.153 CC test/nvme/aer/aer.o 00:04:44.412 CC test/nvme/reset/reset.o 00:04:44.412 CC test/env/mem_callbacks/mem_callbacks.o 00:04:44.412 CC test/lvol/esnap/esnap.o 00:04:44.412 CC test/env/vtophys/vtophys.o 00:04:44.412 CXX test/cpp_headers/accel_module.o 00:04:44.412 CC test/rpc_client/rpc_client_test.o 00:04:44.412 LINK event_perf 00:04:44.412 LINK vtophys 00:04:44.412 CXX test/cpp_headers/assert.o 00:04:44.671 LINK reset 00:04:44.671 LINK rpc_client_test 00:04:44.671 LINK aer 00:04:44.671 CC test/event/reactor/reactor.o 00:04:44.671 CXX test/cpp_headers/barrier.o 00:04:44.671 CXX test/cpp_headers/base64.o 00:04:44.671 CXX test/cpp_headers/bdev.o 00:04:44.671 CC test/event/reactor_perf/reactor_perf.o 00:04:44.671 LINK reactor 00:04:44.931 LINK bdevperf 00:04:44.931 CC test/nvme/sgl/sgl.o 00:04:44.931 LINK mem_callbacks 00:04:44.931 LINK reactor_perf 00:04:44.931 CXX test/cpp_headers/bdev_module.o 00:04:44.931 CXX test/cpp_headers/bdev_zone.o 00:04:44.931 CC test/event/app_repeat/app_repeat.o 00:04:44.931 LINK iscsi_fuzz 00:04:44.931 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:44.931 CC test/accel/dif/dif.o 00:04:45.190 CXX test/cpp_headers/bit_array.o 00:04:45.190 LINK app_repeat 00:04:45.190 LINK sgl 
00:04:45.190 CC test/env/memory/memory_ut.o 00:04:45.190 LINK env_dpdk_post_init 00:04:45.190 CC test/event/scheduler/scheduler.o 00:04:45.190 CC examples/nvmf/nvmf/nvmf.o 00:04:45.190 CXX test/cpp_headers/bit_pool.o 00:04:45.449 CC test/nvme/e2edp/nvme_dp.o 00:04:45.449 CC test/nvme/overhead/overhead.o 00:04:45.449 CC test/nvme/err_injection/err_injection.o 00:04:45.449 CXX test/cpp_headers/blob_bdev.o 00:04:45.449 CC test/nvme/startup/startup.o 00:04:45.449 LINK scheduler 00:04:45.449 LINK nvmf 00:04:45.449 LINK err_injection 00:04:45.449 CXX test/cpp_headers/blobfs_bdev.o 00:04:45.708 LINK startup 00:04:45.708 LINK nvme_dp 00:04:45.708 LINK overhead 00:04:45.708 CC test/nvme/reserve/reserve.o 00:04:45.708 CXX test/cpp_headers/blobfs.o 00:04:45.708 LINK dif 00:04:45.708 CC test/env/pci/pci_ut.o 00:04:45.708 CC test/nvme/simple_copy/simple_copy.o 00:04:45.708 CC test/nvme/boot_partition/boot_partition.o 00:04:45.708 CC test/nvme/connect_stress/connect_stress.o 00:04:45.968 CC test/nvme/compliance/nvme_compliance.o 00:04:45.968 CXX test/cpp_headers/blob.o 00:04:45.968 LINK reserve 00:04:45.968 CXX test/cpp_headers/conf.o 00:04:45.968 LINK connect_stress 00:04:45.968 LINK boot_partition 00:04:45.968 LINK simple_copy 00:04:46.235 CXX test/cpp_headers/config.o 00:04:46.235 CXX test/cpp_headers/cpuset.o 00:04:46.235 LINK pci_ut 00:04:46.235 CC test/nvme/fused_ordering/fused_ordering.o 00:04:46.235 LINK nvme_compliance 00:04:46.235 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:46.235 CC test/nvme/fdp/fdp.o 00:04:46.235 CC test/bdev/bdevio/bdevio.o 00:04:46.235 LINK memory_ut 00:04:46.235 CC test/nvme/cuse/cuse.o 00:04:46.235 CXX test/cpp_headers/crc16.o 00:04:46.509 CXX test/cpp_headers/crc32.o 00:04:46.509 LINK fused_ordering 00:04:46.509 LINK doorbell_aers 00:04:46.509 CXX test/cpp_headers/crc64.o 00:04:46.509 CXX test/cpp_headers/dif.o 00:04:46.509 CXX test/cpp_headers/dma.o 00:04:46.509 CXX test/cpp_headers/endian.o 00:04:46.509 CXX test/cpp_headers/env_dpdk.o 
00:04:46.509 CXX test/cpp_headers/env.o 00:04:46.509 CXX test/cpp_headers/event.o 00:04:46.509 LINK fdp 00:04:46.509 CXX test/cpp_headers/fd_group.o 00:04:46.509 CXX test/cpp_headers/fd.o 00:04:46.768 LINK bdevio 00:04:46.768 CXX test/cpp_headers/file.o 00:04:46.768 CXX test/cpp_headers/fsdev.o 00:04:46.768 CXX test/cpp_headers/fsdev_module.o 00:04:46.768 CXX test/cpp_headers/ftl.o 00:04:46.768 CXX test/cpp_headers/fuse_dispatcher.o 00:04:46.768 CXX test/cpp_headers/gpt_spec.o 00:04:46.768 CXX test/cpp_headers/hexlify.o 00:04:46.768 CXX test/cpp_headers/histogram_data.o 00:04:46.768 CXX test/cpp_headers/idxd.o 00:04:46.768 CXX test/cpp_headers/idxd_spec.o 00:04:46.768 CXX test/cpp_headers/init.o 00:04:46.768 CXX test/cpp_headers/ioat.o 00:04:46.768 CXX test/cpp_headers/ioat_spec.o 00:04:46.768 CXX test/cpp_headers/iscsi_spec.o 00:04:47.028 CXX test/cpp_headers/json.o 00:04:47.028 CXX test/cpp_headers/jsonrpc.o 00:04:47.028 CXX test/cpp_headers/keyring.o 00:04:47.028 CXX test/cpp_headers/keyring_module.o 00:04:47.028 CXX test/cpp_headers/likely.o 00:04:47.028 CXX test/cpp_headers/log.o 00:04:47.028 CXX test/cpp_headers/lvol.o 00:04:47.028 CXX test/cpp_headers/md5.o 00:04:47.028 CXX test/cpp_headers/memory.o 00:04:47.028 CXX test/cpp_headers/mmio.o 00:04:47.028 CXX test/cpp_headers/nbd.o 00:04:47.028 CXX test/cpp_headers/net.o 00:04:47.028 CXX test/cpp_headers/notify.o 00:04:47.287 CXX test/cpp_headers/nvme.o 00:04:47.287 CXX test/cpp_headers/nvme_intel.o 00:04:47.287 CXX test/cpp_headers/nvme_ocssd.o 00:04:47.287 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:47.287 CXX test/cpp_headers/nvme_spec.o 00:04:47.287 CXX test/cpp_headers/nvme_zns.o 00:04:47.287 CXX test/cpp_headers/nvmf_cmd.o 00:04:47.287 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:47.287 CXX test/cpp_headers/nvmf.o 00:04:47.287 CXX test/cpp_headers/nvmf_spec.o 00:04:47.287 CXX test/cpp_headers/nvmf_transport.o 00:04:47.287 CXX test/cpp_headers/opal.o 00:04:47.287 CXX test/cpp_headers/opal_spec.o 00:04:47.287 
CXX test/cpp_headers/pci_ids.o 00:04:47.547 CXX test/cpp_headers/pipe.o 00:04:47.547 CXX test/cpp_headers/queue.o 00:04:47.547 CXX test/cpp_headers/reduce.o 00:04:47.547 CXX test/cpp_headers/rpc.o 00:04:47.547 CXX test/cpp_headers/scheduler.o 00:04:47.547 CXX test/cpp_headers/scsi.o 00:04:47.547 CXX test/cpp_headers/scsi_spec.o 00:04:47.547 CXX test/cpp_headers/sock.o 00:04:47.547 CXX test/cpp_headers/stdinc.o 00:04:47.547 LINK cuse 00:04:47.547 CXX test/cpp_headers/string.o 00:04:47.547 CXX test/cpp_headers/thread.o 00:04:47.547 CXX test/cpp_headers/trace.o 00:04:47.547 CXX test/cpp_headers/trace_parser.o 00:04:47.547 CXX test/cpp_headers/tree.o 00:04:47.547 CXX test/cpp_headers/ublk.o 00:04:47.547 CXX test/cpp_headers/util.o 00:04:47.806 CXX test/cpp_headers/uuid.o 00:04:47.806 CXX test/cpp_headers/version.o 00:04:47.806 CXX test/cpp_headers/vfio_user_pci.o 00:04:47.806 CXX test/cpp_headers/vfio_user_spec.o 00:04:47.806 CXX test/cpp_headers/vhost.o 00:04:47.806 CXX test/cpp_headers/vmd.o 00:04:47.806 CXX test/cpp_headers/xor.o 00:04:47.806 CXX test/cpp_headers/zipf.o 00:04:50.341 LINK esnap 00:04:50.341 00:04:50.341 real 1m19.322s 00:04:50.341 user 6m0.389s 00:04:50.341 sys 1m6.755s 00:04:50.341 15:20:48 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:50.341 15:20:48 make -- common/autotest_common.sh@10 -- $ set +x 00:04:50.341 ************************************ 00:04:50.341 END TEST make 00:04:50.341 ************************************ 00:04:50.341 15:20:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:50.341 15:20:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:50.341 15:20:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:50.341 15:20:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.341 15:20:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:50.342 15:20:48 -- pm/common@44 -- $ pid=6206 00:04:50.342 15:20:48 -- pm/common@50 
-- $ kill -TERM 6206 00:04:50.342 15:20:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.342 15:20:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:50.342 15:20:48 -- pm/common@44 -- $ pid=6208 00:04:50.342 15:20:48 -- pm/common@50 -- $ kill -TERM 6208 00:04:50.342 15:20:48 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:50.342 15:20:48 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:50.601 15:20:48 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.601 15:20:48 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.601 15:20:48 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.601 15:20:48 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.601 15:20:48 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.601 15:20:48 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.601 15:20:48 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.601 15:20:48 -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.601 15:20:48 -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.601 15:20:48 -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.601 15:20:48 -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.601 15:20:48 -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.601 15:20:48 -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.601 15:20:48 -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.601 15:20:48 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.601 15:20:48 -- scripts/common.sh@344 -- # case "$op" in 00:04:50.601 15:20:48 -- scripts/common.sh@345 -- # : 1 00:04:50.601 15:20:48 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.601 15:20:48 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.601 15:20:48 -- scripts/common.sh@365 -- # decimal 1 00:04:50.601 15:20:48 -- scripts/common.sh@353 -- # local d=1 00:04:50.601 15:20:48 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.601 15:20:48 -- scripts/common.sh@355 -- # echo 1 00:04:50.601 15:20:48 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.601 15:20:48 -- scripts/common.sh@366 -- # decimal 2 00:04:50.601 15:20:48 -- scripts/common.sh@353 -- # local d=2 00:04:50.601 15:20:48 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.601 15:20:48 -- scripts/common.sh@355 -- # echo 2 00:04:50.601 15:20:48 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.601 15:20:48 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.601 15:20:48 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.601 15:20:48 -- scripts/common.sh@368 -- # return 0 00:04:50.601 15:20:48 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.601 15:20:48 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.601 --rc genhtml_branch_coverage=1 00:04:50.601 --rc genhtml_function_coverage=1 00:04:50.601 --rc genhtml_legend=1 00:04:50.601 --rc geninfo_all_blocks=1 00:04:50.601 --rc geninfo_unexecuted_blocks=1 00:04:50.601 00:04:50.601 ' 00:04:50.601 15:20:48 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.601 --rc genhtml_branch_coverage=1 00:04:50.601 --rc genhtml_function_coverage=1 00:04:50.601 --rc genhtml_legend=1 00:04:50.601 --rc geninfo_all_blocks=1 00:04:50.601 --rc geninfo_unexecuted_blocks=1 00:04:50.601 00:04:50.601 ' 00:04:50.602 15:20:48 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.602 --rc genhtml_branch_coverage=1 00:04:50.602 --rc 
genhtml_function_coverage=1 00:04:50.602 --rc genhtml_legend=1 00:04:50.602 --rc geninfo_all_blocks=1 00:04:50.602 --rc geninfo_unexecuted_blocks=1 00:04:50.602 00:04:50.602 ' 00:04:50.602 15:20:48 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.602 --rc genhtml_branch_coverage=1 00:04:50.602 --rc genhtml_function_coverage=1 00:04:50.602 --rc genhtml_legend=1 00:04:50.602 --rc geninfo_all_blocks=1 00:04:50.602 --rc geninfo_unexecuted_blocks=1 00:04:50.602 00:04:50.602 ' 00:04:50.602 15:20:48 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.602 15:20:48 -- nvmf/common.sh@7 -- # uname -s 00:04:50.602 15:20:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.602 15:20:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.602 15:20:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.602 15:20:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.602 15:20:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.602 15:20:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.602 15:20:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.602 15:20:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.602 15:20:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.602 15:20:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.602 15:20:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:642ac8ad-f34e-486b-a948-772d46b362cb 00:04:50.602 15:20:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=642ac8ad-f34e-486b-a948-772d46b362cb 00:04:50.602 15:20:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.602 15:20:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.602 15:20:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.602 15:20:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:50.602 15:20:48 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.602 15:20:48 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.602 15:20:48 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.602 15:20:48 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.602 15:20:48 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.602 15:20:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.602 15:20:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.602 15:20:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.602 15:20:48 -- paths/export.sh@5 -- # export PATH 00:04:50.602 15:20:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.602 15:20:48 -- nvmf/common.sh@51 -- # : 0 00:04:50.602 15:20:48 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.602 15:20:48 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.602 15:20:48 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:50.602 15:20:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.602 15:20:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.602 15:20:48 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.602 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.602 15:20:48 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.602 15:20:48 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.602 15:20:48 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.602 15:20:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:50.602 15:20:48 -- spdk/autotest.sh@32 -- # uname -s 00:04:50.602 15:20:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:50.602 15:20:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:50.602 15:20:48 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:50.602 15:20:48 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:50.602 15:20:48 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:50.602 15:20:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:50.602 15:20:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:50.602 15:20:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:50.602 15:20:49 -- spdk/autotest.sh@48 -- # udevadm_pid=68360 00:04:50.602 15:20:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:50.602 15:20:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:50.602 15:20:49 -- pm/common@17 -- # local monitor 00:04:50.602 15:20:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.602 15:20:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:50.602 15:20:49 -- pm/common@21 -- # date +%s 00:04:50.602 15:20:49 -- pm/common@25 -- # sleep 1 00:04:50.602 15:20:49 -- 
pm/common@21 -- # date +%s 00:04:50.602 15:20:49 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732634449 00:04:50.602 15:20:49 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732634449 00:04:50.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732634449_collect-vmstat.pm.log 00:04:50.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732634449_collect-cpu-load.pm.log 00:04:51.797 15:20:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:51.797 15:20:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:51.797 15:20:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.797 15:20:50 -- common/autotest_common.sh@10 -- # set +x 00:04:51.797 15:20:50 -- spdk/autotest.sh@59 -- # create_test_list 00:04:51.797 15:20:50 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:51.797 15:20:50 -- common/autotest_common.sh@10 -- # set +x 00:04:51.797 15:20:50 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:51.797 15:20:50 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:51.797 15:20:50 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:51.797 15:20:50 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:51.797 15:20:50 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:51.797 15:20:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:51.797 15:20:50 -- common/autotest_common.sh@1457 -- # uname 00:04:51.797 15:20:50 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:51.797 15:20:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:51.797 15:20:50 -- common/autotest_common.sh@1477 -- 
# uname 00:04:51.797 15:20:50 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:51.797 15:20:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:51.797 15:20:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:51.797 lcov: LCOV version 1.15 00:04:51.797 15:20:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:06.793 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:06.793 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:21.707 15:21:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:21.707 15:21:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.707 15:21:18 -- common/autotest_common.sh@10 -- # set +x 00:05:21.707 15:21:18 -- spdk/autotest.sh@78 -- # rm -f 00:05:21.707 15:21:18 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.707 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:21.707 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:21.707 15:21:19 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:21.707 15:21:19 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:21.707 15:21:19 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:21.707 15:21:19 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:21.707 
15:21:19 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:21.707 15:21:19 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:21.707 15:21:19 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:21.707 15:21:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:21.707 15:21:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:21.707 15:21:19 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:21.707 15:21:19 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:21.707 15:21:19 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:21.707 15:21:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:21.707 15:21:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:21.707 15:21:19 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:21.707 15:21:19 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:21.707 15:21:19 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:21.707 15:21:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:21.707 15:21:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:21.707 15:21:19 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:21.707 15:21:19 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:21.707 15:21:19 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:21.707 15:21:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:21.707 15:21:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:21.707 15:21:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:21.707 15:21:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:21.707 15:21:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:21.707 15:21:19 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:21.707 15:21:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:21.707 15:21:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:21.707 No valid GPT data, bailing 00:05:21.707 15:21:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:21.707 15:21:19 -- scripts/common.sh@394 -- # pt= 00:05:21.707 15:21:19 -- scripts/common.sh@395 -- # return 1 00:05:21.707 15:21:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:21.707 1+0 records in 00:05:21.707 1+0 records out 00:05:21.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647524 s, 162 MB/s 00:05:21.707 15:21:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:21.707 15:21:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:21.707 15:21:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:21.707 15:21:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:21.707 15:21:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:21.707 No valid GPT data, bailing 00:05:21.707 15:21:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:21.707 15:21:19 -- scripts/common.sh@394 -- # pt= 00:05:21.707 15:21:19 -- scripts/common.sh@395 -- # return 1 00:05:21.707 15:21:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:21.707 1+0 records in 00:05:21.707 1+0 records out 00:05:21.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00717282 s, 146 MB/s 00:05:21.707 15:21:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:21.707 15:21:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:21.707 15:21:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:21.707 15:21:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:21.707 15:21:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:21.707 No valid GPT data, bailing 00:05:21.707 15:21:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:21.707 15:21:19 -- scripts/common.sh@394 -- # pt= 00:05:21.707 15:21:19 -- scripts/common.sh@395 -- # return 1 00:05:21.707 15:21:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:21.707 1+0 records in 00:05:21.707 1+0 records out 00:05:21.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681528 s, 154 MB/s 00:05:21.707 15:21:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:21.707 15:21:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:21.707 15:21:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:21.707 15:21:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:21.707 15:21:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:21.707 No valid GPT data, bailing 00:05:21.707 15:21:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:21.707 15:21:19 -- scripts/common.sh@394 -- # pt= 00:05:21.707 15:21:19 -- scripts/common.sh@395 -- # return 1 00:05:21.707 15:21:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:21.707 1+0 records in 00:05:21.707 1+0 records out 00:05:21.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684045 s, 153 MB/s 00:05:21.707 15:21:19 -- spdk/autotest.sh@105 -- # sync 00:05:21.707 15:21:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:21.707 15:21:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:21.707 15:21:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:24.996 15:21:22 -- spdk/autotest.sh@111 -- # uname -s 00:05:24.996 15:21:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:24.996 15:21:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:24.996 15:21:22 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:25.255 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.255 Hugepages 00:05:25.255 node hugesize free / total 00:05:25.515 node0 1048576kB 0 / 0 00:05:25.515 node0 2048kB 0 / 0 00:05:25.515 00:05:25.515 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:25.515 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:25.515 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:25.774 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:25.774 15:21:24 -- spdk/autotest.sh@117 -- # uname -s 00:05:25.774 15:21:24 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:25.774 15:21:24 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:25.774 15:21:24 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.711 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.711 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.711 15:21:25 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:28.089 15:21:26 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:28.089 15:21:26 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:28.089 15:21:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:28.089 15:21:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:28.089 15:21:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:28.089 15:21:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:28.089 15:21:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.089 15:21:26 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:28.089 15:21:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:28.089 15:21:26 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:28.089 15:21:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:28.089 15:21:26 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.348 Waiting for block devices as requested 00:05:28.609 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:28.609 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:28.609 15:21:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:28.609 15:21:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:28.609 15:21:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:28.609 15:21:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:28.609 15:21:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:28.609 15:21:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:28.609 15:21:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:28.609 15:21:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:28.609 15:21:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:28.609 15:21:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:28.609 15:21:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:28.609 15:21:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:28.609 15:21:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:28.609 15:21:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:28.609 15:21:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:28.609 15:21:27 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:05:28.609 15:21:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:28.610 15:21:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:28.610 15:21:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:28.869 15:21:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:28.869 15:21:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:28.869 15:21:27 -- common/autotest_common.sh@1543 -- # continue 00:05:28.869 15:21:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:28.869 15:21:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:28.869 15:21:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:28.869 15:21:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:28.869 15:21:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:28.869 15:21:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:28.869 15:21:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:28.869 15:21:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:28.869 15:21:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:28.869 15:21:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:28.869 15:21:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:28.869 15:21:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:28.869 15:21:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:28.869 15:21:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:28.869 15:21:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:28.869 15:21:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:28.869 15:21:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:05:28.869 15:21:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:28.869 15:21:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:28.869 15:21:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:28.869 15:21:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:28.869 15:21:27 -- common/autotest_common.sh@1543 -- # continue 00:05:28.869 15:21:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:28.869 15:21:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.869 15:21:27 -- common/autotest_common.sh@10 -- # set +x 00:05:28.869 15:21:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:28.869 15:21:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.869 15:21:27 -- common/autotest_common.sh@10 -- # set +x 00:05:28.869 15:21:27 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.813 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.813 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.813 15:21:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:29.813 15:21:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.813 15:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:29.813 15:21:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:29.813 15:21:28 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:29.813 15:21:28 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.813 15:21:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:29.813 15:21:28 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:29.813 15:21:28 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:29.813 15:21:28 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:29.813 15:21:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:29.813 
15:21:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:29.813 15:21:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:29.813 15:21:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.813 15:21:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:29.813 15:21:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:30.073 15:21:28 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:30.073 15:21:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:30.073 15:21:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:30.073 15:21:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:30.073 15:21:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:30.073 15:21:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:30.073 15:21:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:30.073 15:21:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:30.073 15:21:28 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:30.073 15:21:28 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:30.073 15:21:28 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:30.073 15:21:28 -- common/autotest_common.sh@1572 -- # return 0 00:05:30.073 15:21:28 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:30.073 15:21:28 -- common/autotest_common.sh@1580 -- # return 0 00:05:30.073 15:21:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:30.073 15:21:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:30.073 15:21:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:30.073 15:21:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:30.073 15:21:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:30.073 15:21:28 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.073 15:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.073 15:21:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:30.073 15:21:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:30.073 15:21:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.073 15:21:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.073 15:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.073 ************************************ 00:05:30.073 START TEST env 00:05:30.073 ************************************ 00:05:30.073 15:21:28 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:30.073 * Looking for test storage... 00:05:30.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:30.073 15:21:28 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.073 15:21:28 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.073 15:21:28 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.333 15:21:28 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.333 15:21:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.333 15:21:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.333 15:21:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.333 15:21:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.333 15:21:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.333 15:21:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.333 15:21:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.333 15:21:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.333 15:21:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.333 15:21:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.333 15:21:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.333 15:21:28 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:30.333 15:21:28 env -- scripts/common.sh@345 -- # : 1 00:05:30.333 15:21:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.333 15:21:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.333 15:21:28 env -- scripts/common.sh@365 -- # decimal 1 00:05:30.333 15:21:28 env -- scripts/common.sh@353 -- # local d=1 00:05:30.333 15:21:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.333 15:21:28 env -- scripts/common.sh@355 -- # echo 1 00:05:30.333 15:21:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.333 15:21:28 env -- scripts/common.sh@366 -- # decimal 2 00:05:30.333 15:21:28 env -- scripts/common.sh@353 -- # local d=2 00:05:30.333 15:21:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.333 15:21:28 env -- scripts/common.sh@355 -- # echo 2 00:05:30.333 15:21:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.333 15:21:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.333 15:21:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.333 15:21:28 env -- scripts/common.sh@368 -- # return 0 00:05:30.333 15:21:28 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.333 15:21:28 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.333 --rc genhtml_branch_coverage=1 00:05:30.333 --rc genhtml_function_coverage=1 00:05:30.333 --rc genhtml_legend=1 00:05:30.333 --rc geninfo_all_blocks=1 00:05:30.333 --rc geninfo_unexecuted_blocks=1 00:05:30.333 00:05:30.333 ' 00:05:30.333 15:21:28 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.333 --rc genhtml_branch_coverage=1 00:05:30.333 --rc genhtml_function_coverage=1 00:05:30.333 --rc genhtml_legend=1 00:05:30.333 --rc 
geninfo_all_blocks=1 00:05:30.333 --rc geninfo_unexecuted_blocks=1 00:05:30.333 00:05:30.333 ' 00:05:30.333 15:21:28 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.333 --rc genhtml_branch_coverage=1 00:05:30.333 --rc genhtml_function_coverage=1 00:05:30.333 --rc genhtml_legend=1 00:05:30.333 --rc geninfo_all_blocks=1 00:05:30.333 --rc geninfo_unexecuted_blocks=1 00:05:30.333 00:05:30.333 ' 00:05:30.333 15:21:28 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.333 --rc genhtml_branch_coverage=1 00:05:30.333 --rc genhtml_function_coverage=1 00:05:30.333 --rc genhtml_legend=1 00:05:30.333 --rc geninfo_all_blocks=1 00:05:30.333 --rc geninfo_unexecuted_blocks=1 00:05:30.333 00:05:30.333 ' 00:05:30.333 15:21:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:30.333 15:21:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.333 15:21:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.333 15:21:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.333 ************************************ 00:05:30.333 START TEST env_memory 00:05:30.333 ************************************ 00:05:30.333 15:21:28 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:30.333 00:05:30.333 00:05:30.333 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.333 http://cunit.sourceforge.net/ 00:05:30.333 00:05:30.333 00:05:30.333 Suite: memory 00:05:30.334 Test: alloc and free memory map ...[2024-11-26 15:21:28.681752] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.334 passed 00:05:30.334 Test: mem map translation ...[2024-11-26 15:21:28.727866] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.334 [2024-11-26 15:21:28.727988] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.334 [2024-11-26 15:21:28.728100] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.334 [2024-11-26 15:21:28.728255] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.334 passed 00:05:30.334 Test: mem map registration ...[2024-11-26 15:21:28.794822] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:30.334 [2024-11-26 15:21:28.794966] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:30.593 passed 00:05:30.593 Test: mem map adjacent registrations ...passed 00:05:30.593 00:05:30.593 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.593 suites 1 1 n/a 0 0 00:05:30.593 tests 4 4 4 0 0 00:05:30.593 asserts 152 152 152 0 n/a 00:05:30.593 00:05:30.593 Elapsed time = 0.250 seconds 00:05:30.593 00:05:30.593 real 0m0.293s 00:05:30.593 user 0m0.257s 00:05:30.593 sys 0m0.027s 00:05:30.593 15:21:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.593 15:21:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:30.593 ************************************ 00:05:30.593 END TEST env_memory 00:05:30.593 ************************************ 00:05:30.593 15:21:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:30.593 
15:21:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.593 15:21:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.593 15:21:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.593 ************************************ 00:05:30.593 START TEST env_vtophys 00:05:30.593 ************************************ 00:05:30.593 15:21:28 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:30.593 EAL: lib.eal log level changed from notice to debug 00:05:30.593 EAL: Detected lcore 0 as core 0 on socket 0 00:05:30.593 EAL: Detected lcore 1 as core 0 on socket 0 00:05:30.593 EAL: Detected lcore 2 as core 0 on socket 0 00:05:30.593 EAL: Detected lcore 3 as core 0 on socket 0 00:05:30.593 EAL: Detected lcore 4 as core 0 on socket 0 00:05:30.593 EAL: Detected lcore 5 as core 0 on socket 0 00:05:30.593 EAL: Detected lcore 6 as core 0 on socket 0 00:05:30.593 EAL: Detected lcore 7 as core 0 on socket 0 00:05:30.593 EAL: Detected lcore 8 as core 0 on socket 0 00:05:30.593 EAL: Detected lcore 9 as core 0 on socket 0 00:05:30.593 EAL: Maximum logical cores by configuration: 128 00:05:30.593 EAL: Detected CPU lcores: 10 00:05:30.593 EAL: Detected NUMA nodes: 1 00:05:30.593 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:30.593 EAL: Detected shared linkage of DPDK 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:30.594 EAL: Registered [vdev] bus. 
00:05:30.594 EAL: bus.vdev log level changed from disabled to notice 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:30.594 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:30.594 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:05:30.594 EAL: open shared lib 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:05:30.594 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:05:30.594 EAL: No shared files mode enabled, IPC will be disabled 00:05:30.594 EAL: No shared files mode enabled, IPC is disabled 00:05:30.594 EAL: Selected IOVA mode 'PA' 00:05:30.594 EAL: Probing VFIO support... 00:05:30.594 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:30.594 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:30.594 EAL: Ask a virtual area of 0x2e000 bytes 00:05:30.594 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:30.594 EAL: Setting up physically contiguous memory... 00:05:30.594 EAL: Setting maximum number of open files to 524288 00:05:30.594 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:30.594 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:30.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.594 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:30.594 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.594 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:30.594 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:30.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.594 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:30.594 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.594 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:30.594 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:30.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.594 EAL: Virtual 
area found at 0x200800400000 (size = 0x61000) 00:05:30.594 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.594 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:30.594 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:30.594 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.594 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:30.594 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.594 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.594 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:30.594 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:30.594 EAL: Hugepages will be freed exactly as allocated. 00:05:30.594 EAL: No shared files mode enabled, IPC is disabled 00:05:30.594 EAL: No shared files mode enabled, IPC is disabled 00:05:30.853 EAL: TSC frequency is ~2294600 KHz 00:05:30.853 EAL: Main lcore 0 is ready (tid=7f335abf2a40;cpuset=[0]) 00:05:30.853 EAL: Trying to obtain current memory policy. 00:05:30.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.853 EAL: Restoring previous memory policy: 0 00:05:30.853 EAL: request: mp_malloc_sync 00:05:30.853 EAL: No shared files mode enabled, IPC is disabled 00:05:30.853 EAL: Heap on socket 0 was expanded by 2MB 00:05:30.853 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:05:30.853 EAL: No shared files mode enabled, IPC is disabled 00:05:30.853 EAL: Mem event callback 'spdk:(nil)' registered 00:05:30.853 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:30.853 00:05:30.853 00:05:30.853 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.853 http://cunit.sourceforge.net/ 00:05:30.853 00:05:30.853 00:05:30.853 Suite: components_suite 00:05:31.112 Test: vtophys_malloc_test ...passed 00:05:31.112 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:31.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.112 EAL: Restoring previous memory policy: 4 00:05:31.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.112 EAL: request: mp_malloc_sync 00:05:31.112 EAL: No shared files mode enabled, IPC is disabled 00:05:31.112 EAL: Heap on socket 0 was expanded by 4MB 00:05:31.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.112 EAL: request: mp_malloc_sync 00:05:31.112 EAL: No shared files mode enabled, IPC is disabled 00:05:31.112 EAL: Heap on socket 0 was shrunk by 4MB 00:05:31.112 EAL: Trying to obtain current memory policy. 00:05:31.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.112 EAL: Restoring previous memory policy: 4 00:05:31.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.112 EAL: request: mp_malloc_sync 00:05:31.112 EAL: No shared files mode enabled, IPC is disabled 00:05:31.112 EAL: Heap on socket 0 was expanded by 6MB 00:05:31.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.112 EAL: request: mp_malloc_sync 00:05:31.112 EAL: No shared files mode enabled, IPC is disabled 00:05:31.112 EAL: Heap on socket 0 was shrunk by 6MB 00:05:31.112 EAL: Trying to obtain current memory policy. 
00:05:31.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.112 EAL: Restoring previous memory policy: 4 00:05:31.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.112 EAL: request: mp_malloc_sync 00:05:31.112 EAL: No shared files mode enabled, IPC is disabled 00:05:31.112 EAL: Heap on socket 0 was expanded by 10MB 00:05:31.112 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.112 EAL: request: mp_malloc_sync 00:05:31.112 EAL: No shared files mode enabled, IPC is disabled 00:05:31.113 EAL: Heap on socket 0 was shrunk by 10MB 00:05:31.113 EAL: Trying to obtain current memory policy. 00:05:31.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.113 EAL: Restoring previous memory policy: 4 00:05:31.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.113 EAL: request: mp_malloc_sync 00:05:31.113 EAL: No shared files mode enabled, IPC is disabled 00:05:31.113 EAL: Heap on socket 0 was expanded by 18MB 00:05:31.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.113 EAL: request: mp_malloc_sync 00:05:31.113 EAL: No shared files mode enabled, IPC is disabled 00:05:31.113 EAL: Heap on socket 0 was shrunk by 18MB 00:05:31.113 EAL: Trying to obtain current memory policy. 00:05:31.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.113 EAL: Restoring previous memory policy: 4 00:05:31.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.113 EAL: request: mp_malloc_sync 00:05:31.113 EAL: No shared files mode enabled, IPC is disabled 00:05:31.113 EAL: Heap on socket 0 was expanded by 34MB 00:05:31.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.113 EAL: request: mp_malloc_sync 00:05:31.113 EAL: No shared files mode enabled, IPC is disabled 00:05:31.113 EAL: Heap on socket 0 was shrunk by 34MB 00:05:31.113 EAL: Trying to obtain current memory policy. 
00:05:31.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.372 EAL: Restoring previous memory policy: 4 00:05:31.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.372 EAL: request: mp_malloc_sync 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.372 EAL: Heap on socket 0 was expanded by 66MB 00:05:31.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.372 EAL: request: mp_malloc_sync 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.372 EAL: Heap on socket 0 was shrunk by 66MB 00:05:31.372 EAL: Trying to obtain current memory policy. 00:05:31.372 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.372 EAL: Restoring previous memory policy: 4 00:05:31.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.372 EAL: request: mp_malloc_sync 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.372 EAL: Heap on socket 0 was expanded by 130MB 00:05:31.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.372 EAL: request: mp_malloc_sync 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.372 EAL: Heap on socket 0 was shrunk by 130MB 00:05:31.372 EAL: Trying to obtain current memory policy. 00:05:31.372 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.372 EAL: Restoring previous memory policy: 4 00:05:31.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.372 EAL: request: mp_malloc_sync 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.372 EAL: Heap on socket 0 was expanded by 258MB 00:05:31.372 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.372 EAL: request: mp_malloc_sync 00:05:31.372 EAL: No shared files mode enabled, IPC is disabled 00:05:31.372 EAL: Heap on socket 0 was shrunk by 258MB 00:05:31.372 EAL: Trying to obtain current memory policy. 
00:05:31.372 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.631 EAL: Restoring previous memory policy: 4 00:05:31.631 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.631 EAL: request: mp_malloc_sync 00:05:31.631 EAL: No shared files mode enabled, IPC is disabled 00:05:31.631 EAL: Heap on socket 0 was expanded by 514MB 00:05:31.631 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.631 EAL: request: mp_malloc_sync 00:05:31.631 EAL: No shared files mode enabled, IPC is disabled 00:05:31.631 EAL: Heap on socket 0 was shrunk by 514MB 00:05:31.631 EAL: Trying to obtain current memory policy. 00:05:31.631 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.890 EAL: Restoring previous memory policy: 4 00:05:31.890 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.890 EAL: request: mp_malloc_sync 00:05:31.890 EAL: No shared files mode enabled, IPC is disabled 00:05:31.890 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.149 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.149 EAL: request: mp_malloc_sync 00:05:32.149 EAL: No shared files mode enabled, IPC is disabled 00:05:32.149 passed 00:05:32.149 00:05:32.149 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.149 suites 1 1 n/a 0 0 00:05:32.149 tests 2 2 2 0 0 00:05:32.149 asserts 5316 5316 5316 0 n/a 00:05:32.149 00:05:32.149 Elapsed time = 1.360 seconds 00:05:32.149 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:32.149 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.149 EAL: request: mp_malloc_sync 00:05:32.149 EAL: No shared files mode enabled, IPC is disabled 00:05:32.149 EAL: Heap on socket 0 was shrunk by 2MB 00:05:32.149 EAL: No shared files mode enabled, IPC is disabled 00:05:32.149 EAL: No shared files mode enabled, IPC is disabled 00:05:32.149 EAL: No shared files mode enabled, IPC is disabled 00:05:32.408 00:05:32.408 real 0m1.650s 00:05:32.408 user 0m0.781s 00:05:32.408 sys 0m0.732s 00:05:32.408 15:21:30 env.env_vtophys -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:05:32.408 15:21:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:32.408 ************************************ 00:05:32.408 END TEST env_vtophys 00:05:32.408 ************************************ 00:05:32.408 15:21:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.408 15:21:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.408 15:21:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.408 15:21:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.408 ************************************ 00:05:32.408 START TEST env_pci 00:05:32.408 ************************************ 00:05:32.408 15:21:30 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:32.408 00:05:32.408 00:05:32.408 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.408 http://cunit.sourceforge.net/ 00:05:32.408 00:05:32.408 00:05:32.408 Suite: pci 00:05:32.408 Test: pci_hook ...[2024-11-26 15:21:30.725119] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70590 has claimed it 00:05:32.408 passed 00:05:32.408 00:05:32.408 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.408 suites 1 1 n/a 0 0 00:05:32.408 tests 1 1 1 0 0 00:05:32.408 asserts 25 25 25 0 n/a 00:05:32.408 00:05:32.408 Elapsed time = 0.007 seconds 00:05:32.408 EAL: Cannot find device (10000:00:01.0) 00:05:32.408 EAL: Failed to attach device on primary process 00:05:32.408 ************************************ 00:05:32.408 END TEST env_pci 00:05:32.408 ************************************ 00:05:32.408 00:05:32.408 real 0m0.114s 00:05:32.408 user 0m0.064s 00:05:32.408 sys 0m0.049s 00:05:32.408 15:21:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.408 15:21:30 env.env_pci -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.408 15:21:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:32.408 15:21:30 env -- env/env.sh@15 -- # uname 00:05:32.408 15:21:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:32.408 15:21:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:32.408 15:21:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.408 15:21:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:32.408 15:21:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.408 15:21:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.408 ************************************ 00:05:32.408 START TEST env_dpdk_post_init 00:05:32.408 ************************************ 00:05:32.408 15:21:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.667 EAL: Detected CPU lcores: 10 00:05:32.667 EAL: Detected NUMA nodes: 1 00:05:32.667 EAL: Detected shared linkage of DPDK 00:05:32.667 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.667 EAL: Selected IOVA mode 'PA' 00:05:32.667 Starting DPDK initialization... 00:05:32.667 Starting SPDK post initialization... 00:05:32.667 SPDK NVMe probe 00:05:32.667 Attaching to 0000:00:10.0 00:05:32.667 Attaching to 0000:00:11.0 00:05:32.667 Attached to 0000:00:10.0 00:05:32.667 Attached to 0000:00:11.0 00:05:32.667 Cleaning up... 
00:05:32.926 00:05:32.926 real 0m0.276s 00:05:32.926 user 0m0.081s 00:05:32.926 sys 0m0.096s 00:05:32.926 15:21:31 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.926 ************************************ 00:05:32.926 END TEST env_dpdk_post_init 00:05:32.926 ************************************ 00:05:32.926 15:21:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.926 15:21:31 env -- env/env.sh@26 -- # uname 00:05:32.926 15:21:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:32.926 15:21:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.926 15:21:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.926 15:21:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.926 15:21:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.926 ************************************ 00:05:32.926 START TEST env_mem_callbacks 00:05:32.926 ************************************ 00:05:32.926 15:21:31 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.926 EAL: Detected CPU lcores: 10 00:05:32.926 EAL: Detected NUMA nodes: 1 00:05:32.926 EAL: Detected shared linkage of DPDK 00:05:32.926 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.926 EAL: Selected IOVA mode 'PA' 00:05:32.926 00:05:32.926 00:05:32.926 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.926 http://cunit.sourceforge.net/ 00:05:32.926 00:05:32.926 00:05:32.926 Suite: memory 00:05:32.926 Test: test ... 
00:05:32.926 register 0x200000200000 2097152 00:05:32.926 malloc 3145728 00:05:32.926 register 0x200000400000 4194304 00:05:32.926 buf 0x200000500000 len 3145728 PASSED 00:05:32.926 malloc 64 00:05:32.926 buf 0x2000004fff40 len 64 PASSED 00:05:32.926 malloc 4194304 00:05:32.926 register 0x200000800000 6291456 00:05:32.926 buf 0x200000a00000 len 4194304 PASSED 00:05:32.926 free 0x200000500000 3145728 00:05:33.185 free 0x2000004fff40 64 00:05:33.185 unregister 0x200000400000 4194304 PASSED 00:05:33.185 free 0x200000a00000 4194304 00:05:33.185 unregister 0x200000800000 6291456 PASSED 00:05:33.185 malloc 8388608 00:05:33.185 register 0x200000400000 10485760 00:05:33.185 buf 0x200000600000 len 8388608 PASSED 00:05:33.185 free 0x200000600000 8388608 00:05:33.185 unregister 0x200000400000 10485760 PASSED 00:05:33.185 passed 00:05:33.185 00:05:33.185 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.185 suites 1 1 n/a 0 0 00:05:33.185 tests 1 1 1 0 0 00:05:33.185 asserts 15 15 15 0 n/a 00:05:33.185 00:05:33.185 Elapsed time = 0.013 seconds 00:05:33.185 00:05:33.186 real 0m0.218s 00:05:33.186 user 0m0.038s 00:05:33.186 sys 0m0.077s 00:05:33.186 15:21:31 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.186 ************************************ 00:05:33.186 END TEST env_mem_callbacks 00:05:33.186 15:21:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:33.186 ************************************ 00:05:33.186 00:05:33.186 real 0m3.122s 00:05:33.186 user 0m1.450s 00:05:33.186 sys 0m1.344s 00:05:33.186 15:21:31 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.186 15:21:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.186 ************************************ 00:05:33.186 END TEST env 00:05:33.186 ************************************ 00:05:33.186 15:21:31 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.186 15:21:31 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.186 15:21:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.186 15:21:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.186 ************************************ 00:05:33.186 START TEST rpc 00:05:33.186 ************************************ 00:05:33.186 15:21:31 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:33.444 * Looking for test storage... 00:05:33.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:33.444 15:21:31 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.444 15:21:31 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.444 15:21:31 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.445 15:21:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.445 15:21:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.445 15:21:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.445 15:21:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.445 15:21:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.445 15:21:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.445 15:21:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.445 15:21:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.445 15:21:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.445 15:21:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.445 15:21:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.445 15:21:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.445 15:21:31 rpc -- scripts/common.sh@345 -- # : 1 00:05:33.445 15:21:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.445 15:21:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.445 15:21:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.445 15:21:31 rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.445 15:21:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.445 15:21:31 rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.445 15:21:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.445 15:21:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.445 15:21:31 rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.445 15:21:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.445 15:21:31 rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.445 15:21:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.445 15:21:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.445 15:21:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.445 15:21:31 rpc -- scripts/common.sh@368 -- # return 0 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.445 --rc genhtml_branch_coverage=1 00:05:33.445 --rc genhtml_function_coverage=1 00:05:33.445 --rc genhtml_legend=1 00:05:33.445 --rc geninfo_all_blocks=1 00:05:33.445 --rc geninfo_unexecuted_blocks=1 00:05:33.445 00:05:33.445 ' 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.445 --rc genhtml_branch_coverage=1 00:05:33.445 --rc genhtml_function_coverage=1 00:05:33.445 --rc genhtml_legend=1 00:05:33.445 --rc geninfo_all_blocks=1 00:05:33.445 --rc geninfo_unexecuted_blocks=1 00:05:33.445 00:05:33.445 ' 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:33.445 --rc genhtml_branch_coverage=1 00:05:33.445 --rc genhtml_function_coverage=1 00:05:33.445 --rc genhtml_legend=1 00:05:33.445 --rc geninfo_all_blocks=1 00:05:33.445 --rc geninfo_unexecuted_blocks=1 00:05:33.445 00:05:33.445 ' 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.445 --rc genhtml_branch_coverage=1 00:05:33.445 --rc genhtml_function_coverage=1 00:05:33.445 --rc genhtml_legend=1 00:05:33.445 --rc geninfo_all_blocks=1 00:05:33.445 --rc geninfo_unexecuted_blocks=1 00:05:33.445 00:05:33.445 ' 00:05:33.445 15:21:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70717 00:05:33.445 15:21:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:33.445 15:21:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.445 15:21:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70717 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 70717 ']' 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.445 15:21:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.445 [2024-11-26 15:21:31.886930] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:05:33.445 [2024-11-26 15:21:31.887075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70717 ] 00:05:33.704 [2024-11-26 15:21:32.026659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:33.704 [2024-11-26 15:21:32.063064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.704 [2024-11-26 15:21:32.091681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:33.704 [2024-11-26 15:21:32.091763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70717' to capture a snapshot of events at runtime. 00:05:33.704 [2024-11-26 15:21:32.091782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:33.704 [2024-11-26 15:21:32.091793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:33.704 [2024-11-26 15:21:32.091801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70717 for offline analysis/debug. 
00:05:33.704 [2024-11-26 15:21:32.092219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.272 15:21:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.272 15:21:32 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.272 15:21:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.272 15:21:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.272 15:21:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:34.272 15:21:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:34.272 15:21:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.272 15:21:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.272 15:21:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.272 ************************************ 00:05:34.272 START TEST rpc_integrity 00:05:34.272 ************************************ 00:05:34.272 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:34.272 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.272 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.272 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.272 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.272 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.272 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.531 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.531 15:21:32 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.531 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.531 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.531 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.531 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:34.531 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.531 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.531 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.531 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.531 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.531 { 00:05:34.531 "name": "Malloc0", 00:05:34.531 "aliases": [ 00:05:34.531 "ccde65a5-e7d0-40eb-b0b5-b78101ebeeae" 00:05:34.531 ], 00:05:34.531 "product_name": "Malloc disk", 00:05:34.531 "block_size": 512, 00:05:34.531 "num_blocks": 16384, 00:05:34.531 "uuid": "ccde65a5-e7d0-40eb-b0b5-b78101ebeeae", 00:05:34.531 "assigned_rate_limits": { 00:05:34.531 "rw_ios_per_sec": 0, 00:05:34.531 "rw_mbytes_per_sec": 0, 00:05:34.531 "r_mbytes_per_sec": 0, 00:05:34.531 "w_mbytes_per_sec": 0 00:05:34.531 }, 00:05:34.531 "claimed": false, 00:05:34.531 "zoned": false, 00:05:34.531 "supported_io_types": { 00:05:34.531 "read": true, 00:05:34.531 "write": true, 00:05:34.531 "unmap": true, 00:05:34.531 "flush": true, 00:05:34.531 "reset": true, 00:05:34.531 "nvme_admin": false, 00:05:34.531 "nvme_io": false, 00:05:34.531 "nvme_io_md": false, 00:05:34.531 "write_zeroes": true, 00:05:34.531 "zcopy": true, 00:05:34.531 "get_zone_info": false, 00:05:34.531 "zone_management": false, 00:05:34.531 "zone_append": false, 00:05:34.531 "compare": false, 00:05:34.531 "compare_and_write": false, 00:05:34.531 "abort": true, 00:05:34.531 "seek_hole": false, 
00:05:34.531 "seek_data": false, 00:05:34.531 "copy": true, 00:05:34.531 "nvme_iov_md": false 00:05:34.531 }, 00:05:34.531 "memory_domains": [ 00:05:34.531 { 00:05:34.531 "dma_device_id": "system", 00:05:34.531 "dma_device_type": 1 00:05:34.531 }, 00:05:34.531 { 00:05:34.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.531 "dma_device_type": 2 00:05:34.531 } 00:05:34.531 ], 00:05:34.531 "driver_specific": {} 00:05:34.531 } 00:05:34.531 ]' 00:05:34.531 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.531 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.531 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:34.531 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.531 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.531 [2024-11-26 15:21:32.870463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:34.531 [2024-11-26 15:21:32.870548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.531 [2024-11-26 15:21:32.870586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:34.531 [2024-11-26 15:21:32.870609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.531 [2024-11-26 15:21:32.873410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.531 [2024-11-26 15:21:32.873449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.531 Passthru0 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.532 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.532 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.532 { 00:05:34.532 "name": "Malloc0", 00:05:34.532 "aliases": [ 00:05:34.532 "ccde65a5-e7d0-40eb-b0b5-b78101ebeeae" 00:05:34.532 ], 00:05:34.532 "product_name": "Malloc disk", 00:05:34.532 "block_size": 512, 00:05:34.532 "num_blocks": 16384, 00:05:34.532 "uuid": "ccde65a5-e7d0-40eb-b0b5-b78101ebeeae", 00:05:34.532 "assigned_rate_limits": { 00:05:34.532 "rw_ios_per_sec": 0, 00:05:34.532 "rw_mbytes_per_sec": 0, 00:05:34.532 "r_mbytes_per_sec": 0, 00:05:34.532 "w_mbytes_per_sec": 0 00:05:34.532 }, 00:05:34.532 "claimed": true, 00:05:34.532 "claim_type": "exclusive_write", 00:05:34.532 "zoned": false, 00:05:34.532 "supported_io_types": { 00:05:34.532 "read": true, 00:05:34.532 "write": true, 00:05:34.532 "unmap": true, 00:05:34.532 "flush": true, 00:05:34.532 "reset": true, 00:05:34.532 "nvme_admin": false, 00:05:34.532 "nvme_io": false, 00:05:34.532 "nvme_io_md": false, 00:05:34.532 "write_zeroes": true, 00:05:34.532 "zcopy": true, 00:05:34.532 "get_zone_info": false, 00:05:34.532 "zone_management": false, 00:05:34.532 "zone_append": false, 00:05:34.532 "compare": false, 00:05:34.532 "compare_and_write": false, 00:05:34.532 "abort": true, 00:05:34.532 "seek_hole": false, 00:05:34.532 "seek_data": false, 00:05:34.532 "copy": true, 00:05:34.532 "nvme_iov_md": false 00:05:34.532 }, 00:05:34.532 "memory_domains": [ 00:05:34.532 { 00:05:34.532 "dma_device_id": "system", 00:05:34.532 "dma_device_type": 1 00:05:34.532 }, 00:05:34.532 { 00:05:34.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.532 "dma_device_type": 2 00:05:34.532 } 00:05:34.532 ], 00:05:34.532 "driver_specific": {} 00:05:34.532 }, 00:05:34.532 { 00:05:34.532 "name": "Passthru0", 00:05:34.532 "aliases": [ 00:05:34.532 "cfbe1ba5-7990-57a4-8d8a-52d6075e73b6" 00:05:34.532 ], 00:05:34.532 "product_name": "passthru", 00:05:34.532 
"block_size": 512, 00:05:34.532 "num_blocks": 16384, 00:05:34.532 "uuid": "cfbe1ba5-7990-57a4-8d8a-52d6075e73b6", 00:05:34.532 "assigned_rate_limits": { 00:05:34.532 "rw_ios_per_sec": 0, 00:05:34.532 "rw_mbytes_per_sec": 0, 00:05:34.532 "r_mbytes_per_sec": 0, 00:05:34.532 "w_mbytes_per_sec": 0 00:05:34.532 }, 00:05:34.532 "claimed": false, 00:05:34.532 "zoned": false, 00:05:34.532 "supported_io_types": { 00:05:34.532 "read": true, 00:05:34.532 "write": true, 00:05:34.532 "unmap": true, 00:05:34.532 "flush": true, 00:05:34.532 "reset": true, 00:05:34.532 "nvme_admin": false, 00:05:34.532 "nvme_io": false, 00:05:34.532 "nvme_io_md": false, 00:05:34.532 "write_zeroes": true, 00:05:34.532 "zcopy": true, 00:05:34.532 "get_zone_info": false, 00:05:34.532 "zone_management": false, 00:05:34.532 "zone_append": false, 00:05:34.532 "compare": false, 00:05:34.532 "compare_and_write": false, 00:05:34.532 "abort": true, 00:05:34.532 "seek_hole": false, 00:05:34.532 "seek_data": false, 00:05:34.532 "copy": true, 00:05:34.532 "nvme_iov_md": false 00:05:34.532 }, 00:05:34.532 "memory_domains": [ 00:05:34.532 { 00:05:34.532 "dma_device_id": "system", 00:05:34.532 "dma_device_type": 1 00:05:34.532 }, 00:05:34.532 { 00:05:34.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.532 "dma_device_type": 2 00:05:34.532 } 00:05:34.532 ], 00:05:34.532 "driver_specific": { 00:05:34.532 "passthru": { 00:05:34.532 "name": "Passthru0", 00:05:34.532 "base_bdev_name": "Malloc0" 00:05:34.532 } 00:05:34.532 } 00:05:34.532 } 00:05:34.532 ]' 00:05:34.532 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.532 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.532 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.532 15:21:32 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.532 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.532 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.532 15:21:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.532 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.532 15:21:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.791 15:21:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.791 00:05:34.791 real 0m0.319s 00:05:34.791 user 0m0.189s 00:05:34.791 sys 0m0.058s 00:05:34.791 15:21:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.791 15:21:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.791 ************************************ 00:05:34.791 END TEST rpc_integrity 00:05:34.791 ************************************ 00:05:34.791 15:21:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:34.791 15:21:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.791 15:21:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.791 15:21:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.791 ************************************ 00:05:34.791 START TEST rpc_plugins 00:05:34.791 ************************************ 00:05:34.791 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:34.791 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:34.791 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.791 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.791 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.791 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:34.791 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:34.791 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.791 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.791 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.791 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:34.791 { 00:05:34.791 "name": "Malloc1", 00:05:34.791 "aliases": [ 00:05:34.791 "0387dad4-765a-4d02-88c2-a0be7f17d2f6" 00:05:34.791 ], 00:05:34.791 "product_name": "Malloc disk", 00:05:34.791 "block_size": 4096, 00:05:34.791 "num_blocks": 256, 00:05:34.791 "uuid": "0387dad4-765a-4d02-88c2-a0be7f17d2f6", 00:05:34.791 "assigned_rate_limits": { 00:05:34.791 "rw_ios_per_sec": 0, 00:05:34.791 "rw_mbytes_per_sec": 0, 00:05:34.791 "r_mbytes_per_sec": 0, 00:05:34.791 "w_mbytes_per_sec": 0 00:05:34.791 }, 00:05:34.791 "claimed": false, 00:05:34.791 "zoned": false, 00:05:34.791 "supported_io_types": { 00:05:34.791 "read": true, 00:05:34.791 "write": true, 00:05:34.792 "unmap": true, 00:05:34.792 "flush": true, 00:05:34.792 "reset": true, 00:05:34.792 "nvme_admin": false, 00:05:34.792 "nvme_io": false, 00:05:34.792 "nvme_io_md": false, 00:05:34.792 "write_zeroes": true, 00:05:34.792 "zcopy": true, 00:05:34.792 "get_zone_info": false, 00:05:34.792 "zone_management": false, 00:05:34.792 "zone_append": false, 00:05:34.792 "compare": false, 00:05:34.792 "compare_and_write": false, 00:05:34.792 "abort": true, 00:05:34.792 "seek_hole": false, 00:05:34.792 "seek_data": false, 00:05:34.792 "copy": 
true, 00:05:34.792 "nvme_iov_md": false 00:05:34.792 }, 00:05:34.792 "memory_domains": [ 00:05:34.792 { 00:05:34.792 "dma_device_id": "system", 00:05:34.792 "dma_device_type": 1 00:05:34.792 }, 00:05:34.792 { 00:05:34.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.792 "dma_device_type": 2 00:05:34.792 } 00:05:34.792 ], 00:05:34.792 "driver_specific": {} 00:05:34.792 } 00:05:34.792 ]' 00:05:34.792 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:34.792 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:34.792 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:34.792 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.792 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.792 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.792 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:34.792 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.792 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.792 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.792 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:34.792 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:34.792 15:21:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:34.792 00:05:34.792 real 0m0.149s 00:05:34.792 user 0m0.076s 00:05:34.792 sys 0m0.033s 00:05:34.792 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.792 15:21:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:34.792 ************************************ 00:05:34.792 END TEST rpc_plugins 00:05:34.792 ************************************ 00:05:35.052 15:21:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:35.052 15:21:33 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.052 15:21:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.052 15:21:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.052 ************************************ 00:05:35.052 START TEST rpc_trace_cmd_test 00:05:35.052 ************************************ 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:35.052 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70717", 00:05:35.052 "tpoint_group_mask": "0x8", 00:05:35.052 "iscsi_conn": { 00:05:35.052 "mask": "0x2", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "scsi": { 00:05:35.052 "mask": "0x4", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "bdev": { 00:05:35.052 "mask": "0x8", 00:05:35.052 "tpoint_mask": "0xffffffffffffffff" 00:05:35.052 }, 00:05:35.052 "nvmf_rdma": { 00:05:35.052 "mask": "0x10", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "nvmf_tcp": { 00:05:35.052 "mask": "0x20", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "ftl": { 00:05:35.052 "mask": "0x40", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "blobfs": { 00:05:35.052 "mask": "0x80", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "dsa": { 00:05:35.052 "mask": "0x200", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "thread": { 00:05:35.052 "mask": "0x400", 00:05:35.052 
"tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "nvme_pcie": { 00:05:35.052 "mask": "0x800", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "iaa": { 00:05:35.052 "mask": "0x1000", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "nvme_tcp": { 00:05:35.052 "mask": "0x2000", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "bdev_nvme": { 00:05:35.052 "mask": "0x4000", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "sock": { 00:05:35.052 "mask": "0x8000", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "blob": { 00:05:35.052 "mask": "0x10000", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "bdev_raid": { 00:05:35.052 "mask": "0x20000", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 }, 00:05:35.052 "scheduler": { 00:05:35.052 "mask": "0x40000", 00:05:35.052 "tpoint_mask": "0x0" 00:05:35.052 } 00:05:35.052 }' 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:35.052 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:35.312 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:35.312 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:35.312 15:21:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:35.312 00:05:35.312 real 0m0.272s 00:05:35.312 user 0m0.227s 00:05:35.312 sys 0m0.034s 00:05:35.312 15:21:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:35.312 15:21:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.312 ************************************ 00:05:35.312 END TEST rpc_trace_cmd_test 00:05:35.312 ************************************ 00:05:35.312 15:21:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:35.312 15:21:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:35.312 15:21:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:35.312 15:21:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.312 15:21:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.312 15:21:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.312 ************************************ 00:05:35.312 START TEST rpc_daemon_integrity 00:05:35.312 ************************************ 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.312 { 00:05:35.312 "name": "Malloc2", 00:05:35.312 "aliases": [ 00:05:35.312 "f6f7fdc7-83e8-4c7d-bf8b-9acf7b15f972" 00:05:35.312 ], 00:05:35.312 "product_name": "Malloc disk", 00:05:35.312 "block_size": 512, 00:05:35.312 "num_blocks": 16384, 00:05:35.312 "uuid": "f6f7fdc7-83e8-4c7d-bf8b-9acf7b15f972", 00:05:35.312 "assigned_rate_limits": { 00:05:35.312 "rw_ios_per_sec": 0, 00:05:35.312 "rw_mbytes_per_sec": 0, 00:05:35.312 "r_mbytes_per_sec": 0, 00:05:35.312 "w_mbytes_per_sec": 0 00:05:35.312 }, 00:05:35.312 "claimed": false, 00:05:35.312 "zoned": false, 00:05:35.312 "supported_io_types": { 00:05:35.312 "read": true, 00:05:35.312 "write": true, 00:05:35.312 "unmap": true, 00:05:35.312 "flush": true, 00:05:35.312 "reset": true, 00:05:35.312 "nvme_admin": false, 00:05:35.312 "nvme_io": false, 00:05:35.312 "nvme_io_md": false, 00:05:35.312 "write_zeroes": true, 00:05:35.312 "zcopy": true, 00:05:35.312 "get_zone_info": false, 00:05:35.312 "zone_management": false, 00:05:35.312 "zone_append": false, 00:05:35.312 "compare": false, 00:05:35.312 "compare_and_write": false, 00:05:35.312 "abort": true, 00:05:35.312 "seek_hole": false, 00:05:35.312 "seek_data": false, 00:05:35.312 "copy": true, 00:05:35.312 "nvme_iov_md": false 00:05:35.312 }, 00:05:35.312 "memory_domains": [ 00:05:35.312 { 00:05:35.312 "dma_device_id": "system", 00:05:35.312 "dma_device_type": 1 00:05:35.312 }, 00:05:35.312 { 00:05:35.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.312 "dma_device_type": 2 00:05:35.312 } 
00:05:35.312 ], 00:05:35.312 "driver_specific": {} 00:05:35.312 } 00:05:35.312 ]' 00:05:35.312 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.572 [2024-11-26 15:21:33.803438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:35.572 [2024-11-26 15:21:33.803514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.572 [2024-11-26 15:21:33.803537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:35.572 [2024-11-26 15:21:33.803549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.572 [2024-11-26 15:21:33.806073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.572 [2024-11-26 15:21:33.806117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.572 Passthru0 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.572 { 00:05:35.572 "name": "Malloc2", 00:05:35.572 "aliases": [ 00:05:35.572 "f6f7fdc7-83e8-4c7d-bf8b-9acf7b15f972" 
00:05:35.572 ], 00:05:35.572 "product_name": "Malloc disk", 00:05:35.572 "block_size": 512, 00:05:35.572 "num_blocks": 16384, 00:05:35.572 "uuid": "f6f7fdc7-83e8-4c7d-bf8b-9acf7b15f972", 00:05:35.572 "assigned_rate_limits": { 00:05:35.572 "rw_ios_per_sec": 0, 00:05:35.572 "rw_mbytes_per_sec": 0, 00:05:35.572 "r_mbytes_per_sec": 0, 00:05:35.572 "w_mbytes_per_sec": 0 00:05:35.572 }, 00:05:35.572 "claimed": true, 00:05:35.572 "claim_type": "exclusive_write", 00:05:35.572 "zoned": false, 00:05:35.572 "supported_io_types": { 00:05:35.572 "read": true, 00:05:35.572 "write": true, 00:05:35.572 "unmap": true, 00:05:35.572 "flush": true, 00:05:35.572 "reset": true, 00:05:35.572 "nvme_admin": false, 00:05:35.572 "nvme_io": false, 00:05:35.572 "nvme_io_md": false, 00:05:35.572 "write_zeroes": true, 00:05:35.572 "zcopy": true, 00:05:35.572 "get_zone_info": false, 00:05:35.572 "zone_management": false, 00:05:35.572 "zone_append": false, 00:05:35.572 "compare": false, 00:05:35.572 "compare_and_write": false, 00:05:35.572 "abort": true, 00:05:35.572 "seek_hole": false, 00:05:35.572 "seek_data": false, 00:05:35.572 "copy": true, 00:05:35.572 "nvme_iov_md": false 00:05:35.572 }, 00:05:35.572 "memory_domains": [ 00:05:35.572 { 00:05:35.572 "dma_device_id": "system", 00:05:35.572 "dma_device_type": 1 00:05:35.572 }, 00:05:35.572 { 00:05:35.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.572 "dma_device_type": 2 00:05:35.572 } 00:05:35.572 ], 00:05:35.572 "driver_specific": {} 00:05:35.572 }, 00:05:35.572 { 00:05:35.572 "name": "Passthru0", 00:05:35.572 "aliases": [ 00:05:35.572 "9cc0c714-9ab6-567b-a574-90d32709410e" 00:05:35.572 ], 00:05:35.572 "product_name": "passthru", 00:05:35.572 "block_size": 512, 00:05:35.572 "num_blocks": 16384, 00:05:35.572 "uuid": "9cc0c714-9ab6-567b-a574-90d32709410e", 00:05:35.572 "assigned_rate_limits": { 00:05:35.572 "rw_ios_per_sec": 0, 00:05:35.572 "rw_mbytes_per_sec": 0, 00:05:35.572 "r_mbytes_per_sec": 0, 00:05:35.572 "w_mbytes_per_sec": 0 
00:05:35.572 }, 00:05:35.572 "claimed": false, 00:05:35.572 "zoned": false, 00:05:35.572 "supported_io_types": { 00:05:35.572 "read": true, 00:05:35.572 "write": true, 00:05:35.572 "unmap": true, 00:05:35.572 "flush": true, 00:05:35.572 "reset": true, 00:05:35.572 "nvme_admin": false, 00:05:35.572 "nvme_io": false, 00:05:35.572 "nvme_io_md": false, 00:05:35.572 "write_zeroes": true, 00:05:35.572 "zcopy": true, 00:05:35.572 "get_zone_info": false, 00:05:35.572 "zone_management": false, 00:05:35.572 "zone_append": false, 00:05:35.572 "compare": false, 00:05:35.572 "compare_and_write": false, 00:05:35.572 "abort": true, 00:05:35.572 "seek_hole": false, 00:05:35.572 "seek_data": false, 00:05:35.572 "copy": true, 00:05:35.572 "nvme_iov_md": false 00:05:35.572 }, 00:05:35.572 "memory_domains": [ 00:05:35.572 { 00:05:35.572 "dma_device_id": "system", 00:05:35.572 "dma_device_type": 1 00:05:35.572 }, 00:05:35.572 { 00:05:35.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.572 "dma_device_type": 2 00:05:35.572 } 00:05:35.572 ], 00:05:35.572 "driver_specific": { 00:05:35.572 "passthru": { 00:05:35.572 "name": "Passthru0", 00:05:35.572 "base_bdev_name": "Malloc2" 00:05:35.572 } 00:05:35.572 } 00:05:35.572 } 00:05:35.572 ]' 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.572 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.573 15:21:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.573 00:05:35.573 real 0m0.319s 00:05:35.573 user 0m0.196s 00:05:35.573 sys 0m0.052s 00:05:35.573 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.573 15:21:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.573 ************************************ 00:05:35.573 END TEST rpc_daemon_integrity 00:05:35.573 ************************************ 00:05:35.573 15:21:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:35.573 15:21:34 rpc -- rpc/rpc.sh@84 -- # killprocess 70717 00:05:35.573 15:21:34 rpc -- common/autotest_common.sh@954 -- # '[' -z 70717 ']' 00:05:35.573 15:21:34 rpc -- common/autotest_common.sh@958 -- # kill -0 70717 00:05:35.573 15:21:34 rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.573 15:21:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.573 15:21:34 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70717 00:05:35.832 15:21:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.832 15:21:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.832 
killing process with pid 70717 00:05:35.832 15:21:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70717' 00:05:35.832 15:21:34 rpc -- common/autotest_common.sh@973 -- # kill 70717 00:05:35.832 15:21:34 rpc -- common/autotest_common.sh@978 -- # wait 70717 00:05:36.091 00:05:36.091 real 0m2.873s 00:05:36.091 user 0m3.483s 00:05:36.091 sys 0m0.860s 00:05:36.091 15:21:34 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.091 15:21:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.091 ************************************ 00:05:36.091 END TEST rpc 00:05:36.091 ************************************ 00:05:36.091 15:21:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:36.091 15:21:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.091 15:21:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.091 15:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:36.091 ************************************ 00:05:36.091 START TEST skip_rpc 00:05:36.091 ************************************ 00:05:36.092 15:21:34 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:36.351 * Looking for test storage... 
00:05:36.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.351 15:21:34 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.351 --rc genhtml_branch_coverage=1 00:05:36.351 --rc genhtml_function_coverage=1 00:05:36.351 --rc genhtml_legend=1 00:05:36.351 --rc geninfo_all_blocks=1 00:05:36.351 --rc geninfo_unexecuted_blocks=1 00:05:36.351 00:05:36.351 ' 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.351 --rc genhtml_branch_coverage=1 00:05:36.351 --rc genhtml_function_coverage=1 00:05:36.351 --rc genhtml_legend=1 00:05:36.351 --rc geninfo_all_blocks=1 00:05:36.351 --rc geninfo_unexecuted_blocks=1 00:05:36.351 00:05:36.351 ' 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:36.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.351 --rc genhtml_branch_coverage=1 00:05:36.351 --rc genhtml_function_coverage=1 00:05:36.351 --rc genhtml_legend=1 00:05:36.351 --rc geninfo_all_blocks=1 00:05:36.351 --rc geninfo_unexecuted_blocks=1 00:05:36.351 00:05:36.351 ' 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.351 --rc genhtml_branch_coverage=1 00:05:36.351 --rc genhtml_function_coverage=1 00:05:36.351 --rc genhtml_legend=1 00:05:36.351 --rc geninfo_all_blocks=1 00:05:36.351 --rc geninfo_unexecuted_blocks=1 00:05:36.351 00:05:36.351 ' 00:05:36.351 15:21:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:36.351 15:21:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:36.351 15:21:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.351 15:21:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.351 ************************************ 00:05:36.351 START TEST skip_rpc 00:05:36.351 ************************************ 00:05:36.351 15:21:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:36.351 15:21:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:36.351 15:21:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70924 00:05:36.351 15:21:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.351 15:21:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:36.611 [2024-11-26 15:21:34.838494] Starting SPDK v25.01-pre 
git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:36.611 [2024-11-26 15:21:34.838621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70924 ] 00:05:36.611 [2024-11-26 15:21:34.974378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:36.611 [2024-11-26 15:21:35.011932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.611 [2024-11-26 15:21:35.040590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.890 15:21:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:41.890 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:41.890 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:41.890 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@663 
-- # (( es > 128 )) 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70924 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 70924 ']' 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 70924 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70924 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.891 killing process with pid 70924 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70924' 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 70924 00:05:41.891 15:21:39 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 70924 00:05:41.891 00:05:41.891 real 0m5.427s 00:05:41.891 user 0m5.011s 00:05:41.891 sys 0m0.342s 00:05:41.891 15:21:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.891 15:21:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.891 ************************************ 00:05:41.891 END TEST skip_rpc 00:05:41.891 ************************************ 00:05:41.891 15:21:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:41.891 15:21:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:41.891 15:21:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.891 15:21:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.891 ************************************ 00:05:41.891 START TEST skip_rpc_with_json 00:05:41.891 ************************************ 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71010 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71010 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71010 ']' 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.891 15:21:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.891 [2024-11-26 15:21:40.323543] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:05:41.891 [2024-11-26 15:21:40.323701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71010 ] 00:05:42.151 [2024-11-26 15:21:40.461445] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:42.151 [2024-11-26 15:21:40.498900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.151 [2024-11-26 15:21:40.527380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.721 [2024-11-26 15:21:41.163232] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:42.721 request: 00:05:42.721 { 00:05:42.721 "trtype": "tcp", 00:05:42.721 "method": "nvmf_get_transports", 00:05:42.721 "req_id": 1 00:05:42.721 } 00:05:42.721 Got JSON-RPC error response 00:05:42.721 response: 00:05:42.721 { 00:05:42.721 "code": -19, 00:05:42.721 "message": "No such device" 00:05:42.721 } 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.721 15:21:41 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.721 [2024-11-26 15:21:41.175374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.721 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.981 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.981 15:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.981 { 00:05:42.981 "subsystems": [ 00:05:42.981 { 00:05:42.981 "subsystem": "fsdev", 00:05:42.981 "config": [ 00:05:42.981 { 00:05:42.981 "method": "fsdev_set_opts", 00:05:42.981 "params": { 00:05:42.981 "fsdev_io_pool_size": 65535, 00:05:42.981 "fsdev_io_cache_size": 256 00:05:42.981 } 00:05:42.981 } 00:05:42.981 ] 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "subsystem": "keyring", 00:05:42.981 "config": [] 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "subsystem": "iobuf", 00:05:42.981 "config": [ 00:05:42.981 { 00:05:42.981 "method": "iobuf_set_options", 00:05:42.981 "params": { 00:05:42.981 "small_pool_count": 8192, 00:05:42.981 "large_pool_count": 1024, 00:05:42.981 "small_bufsize": 8192, 00:05:42.981 "large_bufsize": 135168, 00:05:42.981 "enable_numa": false 00:05:42.981 } 00:05:42.981 } 00:05:42.981 ] 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "subsystem": "sock", 00:05:42.981 "config": [ 00:05:42.981 { 00:05:42.981 "method": "sock_set_default_impl", 00:05:42.981 "params": { 00:05:42.981 "impl_name": "posix" 00:05:42.981 } 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "method": "sock_impl_set_options", 00:05:42.981 "params": { 00:05:42.981 "impl_name": "ssl", 
00:05:42.981 "recv_buf_size": 4096, 00:05:42.981 "send_buf_size": 4096, 00:05:42.981 "enable_recv_pipe": true, 00:05:42.981 "enable_quickack": false, 00:05:42.981 "enable_placement_id": 0, 00:05:42.981 "enable_zerocopy_send_server": true, 00:05:42.981 "enable_zerocopy_send_client": false, 00:05:42.981 "zerocopy_threshold": 0, 00:05:42.981 "tls_version": 0, 00:05:42.981 "enable_ktls": false 00:05:42.981 } 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "method": "sock_impl_set_options", 00:05:42.981 "params": { 00:05:42.981 "impl_name": "posix", 00:05:42.981 "recv_buf_size": 2097152, 00:05:42.981 "send_buf_size": 2097152, 00:05:42.981 "enable_recv_pipe": true, 00:05:42.981 "enable_quickack": false, 00:05:42.981 "enable_placement_id": 0, 00:05:42.981 "enable_zerocopy_send_server": true, 00:05:42.981 "enable_zerocopy_send_client": false, 00:05:42.981 "zerocopy_threshold": 0, 00:05:42.981 "tls_version": 0, 00:05:42.981 "enable_ktls": false 00:05:42.981 } 00:05:42.981 } 00:05:42.981 ] 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "subsystem": "vmd", 00:05:42.981 "config": [] 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "subsystem": "accel", 00:05:42.981 "config": [ 00:05:42.981 { 00:05:42.981 "method": "accel_set_options", 00:05:42.981 "params": { 00:05:42.981 "small_cache_size": 128, 00:05:42.981 "large_cache_size": 16, 00:05:42.981 "task_count": 2048, 00:05:42.981 "sequence_count": 2048, 00:05:42.981 "buf_count": 2048 00:05:42.981 } 00:05:42.981 } 00:05:42.981 ] 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "subsystem": "bdev", 00:05:42.981 "config": [ 00:05:42.981 { 00:05:42.981 "method": "bdev_set_options", 00:05:42.981 "params": { 00:05:42.981 "bdev_io_pool_size": 65535, 00:05:42.981 "bdev_io_cache_size": 256, 00:05:42.981 "bdev_auto_examine": true, 00:05:42.981 "iobuf_small_cache_size": 128, 00:05:42.981 "iobuf_large_cache_size": 16 00:05:42.981 } 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "method": "bdev_raid_set_options", 00:05:42.981 "params": { 00:05:42.981 
"process_window_size_kb": 1024, 00:05:42.981 "process_max_bandwidth_mb_sec": 0 00:05:42.981 } 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "method": "bdev_iscsi_set_options", 00:05:42.981 "params": { 00:05:42.981 "timeout_sec": 30 00:05:42.981 } 00:05:42.981 }, 00:05:42.981 { 00:05:42.981 "method": "bdev_nvme_set_options", 00:05:42.981 "params": { 00:05:42.982 "action_on_timeout": "none", 00:05:42.982 "timeout_us": 0, 00:05:42.982 "timeout_admin_us": 0, 00:05:42.982 "keep_alive_timeout_ms": 10000, 00:05:42.982 "arbitration_burst": 0, 00:05:42.982 "low_priority_weight": 0, 00:05:42.982 "medium_priority_weight": 0, 00:05:42.982 "high_priority_weight": 0, 00:05:42.982 "nvme_adminq_poll_period_us": 10000, 00:05:42.982 "nvme_ioq_poll_period_us": 0, 00:05:42.982 "io_queue_requests": 0, 00:05:42.982 "delay_cmd_submit": true, 00:05:42.982 "transport_retry_count": 4, 00:05:42.982 "bdev_retry_count": 3, 00:05:42.982 "transport_ack_timeout": 0, 00:05:42.982 "ctrlr_loss_timeout_sec": 0, 00:05:42.982 "reconnect_delay_sec": 0, 00:05:42.982 "fast_io_fail_timeout_sec": 0, 00:05:42.982 "disable_auto_failback": false, 00:05:42.982 "generate_uuids": false, 00:05:42.982 "transport_tos": 0, 00:05:42.982 "nvme_error_stat": false, 00:05:42.982 "rdma_srq_size": 0, 00:05:42.982 "io_path_stat": false, 00:05:42.982 "allow_accel_sequence": false, 00:05:42.982 "rdma_max_cq_size": 0, 00:05:42.982 "rdma_cm_event_timeout_ms": 0, 00:05:42.982 "dhchap_digests": [ 00:05:42.982 "sha256", 00:05:42.982 "sha384", 00:05:42.982 "sha512" 00:05:42.982 ], 00:05:42.982 "dhchap_dhgroups": [ 00:05:42.982 "null", 00:05:42.982 "ffdhe2048", 00:05:42.982 "ffdhe3072", 00:05:42.982 "ffdhe4096", 00:05:42.982 "ffdhe6144", 00:05:42.982 "ffdhe8192" 00:05:42.982 ] 00:05:42.982 } 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "method": "bdev_nvme_set_hotplug", 00:05:42.982 "params": { 00:05:42.982 "period_us": 100000, 00:05:42.982 "enable": false 00:05:42.982 } 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "method": 
"bdev_wait_for_examine" 00:05:42.982 } 00:05:42.982 ] 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "subsystem": "scsi", 00:05:42.982 "config": null 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "subsystem": "scheduler", 00:05:42.982 "config": [ 00:05:42.982 { 00:05:42.982 "method": "framework_set_scheduler", 00:05:42.982 "params": { 00:05:42.982 "name": "static" 00:05:42.982 } 00:05:42.982 } 00:05:42.982 ] 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "subsystem": "vhost_scsi", 00:05:42.982 "config": [] 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "subsystem": "vhost_blk", 00:05:42.982 "config": [] 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "subsystem": "ublk", 00:05:42.982 "config": [] 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "subsystem": "nbd", 00:05:42.982 "config": [] 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "subsystem": "nvmf", 00:05:42.982 "config": [ 00:05:42.982 { 00:05:42.982 "method": "nvmf_set_config", 00:05:42.982 "params": { 00:05:42.982 "discovery_filter": "match_any", 00:05:42.982 "admin_cmd_passthru": { 00:05:42.982 "identify_ctrlr": false 00:05:42.982 }, 00:05:42.982 "dhchap_digests": [ 00:05:42.982 "sha256", 00:05:42.982 "sha384", 00:05:42.982 "sha512" 00:05:42.982 ], 00:05:42.982 "dhchap_dhgroups": [ 00:05:42.982 "null", 00:05:42.982 "ffdhe2048", 00:05:42.982 "ffdhe3072", 00:05:42.982 "ffdhe4096", 00:05:42.982 "ffdhe6144", 00:05:42.982 "ffdhe8192" 00:05:42.982 ] 00:05:42.982 } 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "method": "nvmf_set_max_subsystems", 00:05:42.982 "params": { 00:05:42.982 "max_subsystems": 1024 00:05:42.982 } 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "method": "nvmf_set_crdt", 00:05:42.982 "params": { 00:05:42.982 "crdt1": 0, 00:05:42.982 "crdt2": 0, 00:05:42.982 "crdt3": 0 00:05:42.982 } 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "method": "nvmf_create_transport", 00:05:42.982 "params": { 00:05:42.982 "trtype": "TCP", 00:05:42.982 "max_queue_depth": 128, 00:05:42.982 "max_io_qpairs_per_ctrlr": 127, 00:05:42.982 
"in_capsule_data_size": 4096, 00:05:42.982 "max_io_size": 131072, 00:05:42.982 "io_unit_size": 131072, 00:05:42.982 "max_aq_depth": 128, 00:05:42.982 "num_shared_buffers": 511, 00:05:42.982 "buf_cache_size": 4294967295, 00:05:42.982 "dif_insert_or_strip": false, 00:05:42.982 "zcopy": false, 00:05:42.982 "c2h_success": true, 00:05:42.982 "sock_priority": 0, 00:05:42.982 "abort_timeout_sec": 1, 00:05:42.982 "ack_timeout": 0, 00:05:42.982 "data_wr_pool_size": 0 00:05:42.982 } 00:05:42.982 } 00:05:42.982 ] 00:05:42.982 }, 00:05:42.982 { 00:05:42.982 "subsystem": "iscsi", 00:05:42.982 "config": [ 00:05:42.982 { 00:05:42.982 "method": "iscsi_set_options", 00:05:42.982 "params": { 00:05:42.982 "node_base": "iqn.2016-06.io.spdk", 00:05:42.982 "max_sessions": 128, 00:05:42.982 "max_connections_per_session": 2, 00:05:42.982 "max_queue_depth": 64, 00:05:42.982 "default_time2wait": 2, 00:05:42.982 "default_time2retain": 20, 00:05:42.982 "first_burst_length": 8192, 00:05:42.982 "immediate_data": true, 00:05:42.982 "allow_duplicated_isid": false, 00:05:42.982 "error_recovery_level": 0, 00:05:42.982 "nop_timeout": 60, 00:05:42.982 "nop_in_interval": 30, 00:05:42.982 "disable_chap": false, 00:05:42.982 "require_chap": false, 00:05:42.982 "mutual_chap": false, 00:05:42.982 "chap_group": 0, 00:05:42.982 "max_large_datain_per_connection": 64, 00:05:42.982 "max_r2t_per_connection": 4, 00:05:42.982 "pdu_pool_size": 36864, 00:05:42.982 "immediate_data_pool_size": 16384, 00:05:42.982 "data_out_pool_size": 2048 00:05:42.982 } 00:05:42.982 } 00:05:42.982 ] 00:05:42.982 } 00:05:42.982 ] 00:05:42.982 } 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71010 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71010 ']' 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill 
-0 71010 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71010 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.982 killing process with pid 71010 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71010' 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71010 00:05:42.982 15:21:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71010 00:05:43.553 15:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71040 00:05:43.553 15:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.553 15:21:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71040 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71040 ']' 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71040 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71040 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.831 killing process with pid 71040 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71040' 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71040 00:05:48.831 15:21:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71040 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.831 00:05:48.831 real 0m6.949s 00:05:48.831 user 0m6.525s 00:05:48.831 sys 0m0.736s 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.831 ************************************ 00:05:48.831 END TEST skip_rpc_with_json 00:05:48.831 ************************************ 00:05:48.831 15:21:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:48.831 15:21:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.831 15:21:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.831 15:21:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.831 ************************************ 00:05:48.831 START TEST skip_rpc_with_delay 00:05:48.831 ************************************ 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:48.831 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:49.091 [2024-11-26 15:21:47.340562] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:49.091 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:49.091 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:49.091 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:49.091 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:49.091 00:05:49.091 real 0m0.173s 00:05:49.091 user 0m0.103s 00:05:49.091 sys 0m0.068s 00:05:49.091 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.091 15:21:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:49.091 ************************************ 00:05:49.091 END TEST skip_rpc_with_delay 00:05:49.091 ************************************ 00:05:49.091 15:21:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:49.091 15:21:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:49.091 15:21:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:49.091 15:21:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.091 15:21:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.091 15:21:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.091 ************************************ 00:05:49.091 START TEST exit_on_failed_rpc_init 00:05:49.091 ************************************ 00:05:49.091 15:21:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:49.091 15:21:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71146 00:05:49.091 15:21:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.091 15:21:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71146 00:05:49.091 15:21:47 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71146 ']' 00:05:49.091 15:21:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.091 15:21:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.091 15:21:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.091 15:21:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.091 15:21:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.351 [2024-11-26 15:21:47.585638] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:49.351 [2024-11-26 15:21:47.585785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71146 ] 00:05:49.351 [2024-11-26 15:21:47.721046] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:49.351 [2024-11-26 15:21:47.760985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.351 [2024-11-26 15:21:47.786615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:49.922 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.182 [2024-11-26 15:21:48.488225] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:50.182 [2024-11-26 15:21:48.488363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71164 ] 00:05:50.182 [2024-11-26 15:21:48.626639] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.443 [2024-11-26 15:21:48.664154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.443 [2024-11-26 15:21:48.691499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.443 [2024-11-26 15:21:48.691612] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:50.443 [2024-11-26 15:21:48.691632] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:50.443 [2024-11-26 15:21:48.691643] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71146 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71146 ']' 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71146 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71146 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.443 killing process with pid 71146 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 71146' 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71146 00:05:50.443 15:21:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71146 00:05:51.013 00:05:51.013 real 0m1.697s 00:05:51.013 user 0m1.805s 00:05:51.013 sys 0m0.502s 00:05:51.013 15:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.013 15:21:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.013 ************************************ 00:05:51.013 END TEST exit_on_failed_rpc_init 00:05:51.013 ************************************ 00:05:51.013 15:21:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:51.013 00:05:51.013 real 0m14.747s 00:05:51.013 user 0m13.650s 00:05:51.013 sys 0m1.964s 00:05:51.013 15:21:49 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.013 15:21:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.013 ************************************ 00:05:51.013 END TEST skip_rpc 00:05:51.013 ************************************ 00:05:51.013 15:21:49 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:51.013 15:21:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.013 15:21:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.013 15:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.013 ************************************ 00:05:51.013 START TEST rpc_client 00:05:51.013 ************************************ 00:05:51.013 15:21:49 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:51.013 * Looking for test storage... 
00:05:51.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:51.013 15:21:49 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.013 15:21:49 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.013 15:21:49 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.273 15:21:49 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.273 15:21:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.274 15:21:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:51.274 15:21:49 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.274 15:21:49 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.274 --rc genhtml_branch_coverage=1 00:05:51.274 --rc genhtml_function_coverage=1 00:05:51.274 --rc genhtml_legend=1 00:05:51.274 --rc geninfo_all_blocks=1 00:05:51.274 --rc geninfo_unexecuted_blocks=1 00:05:51.274 00:05:51.274 ' 00:05:51.274 15:21:49 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.274 --rc genhtml_branch_coverage=1 00:05:51.274 --rc genhtml_function_coverage=1 00:05:51.274 --rc genhtml_legend=1 00:05:51.274 --rc geninfo_all_blocks=1 00:05:51.274 --rc geninfo_unexecuted_blocks=1 00:05:51.274 00:05:51.274 ' 00:05:51.274 15:21:49 rpc_client -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.274 --rc genhtml_branch_coverage=1 00:05:51.274 --rc genhtml_function_coverage=1 00:05:51.274 --rc genhtml_legend=1 00:05:51.274 --rc geninfo_all_blocks=1 00:05:51.274 --rc geninfo_unexecuted_blocks=1 00:05:51.274 00:05:51.274 ' 00:05:51.274 15:21:49 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.274 --rc genhtml_branch_coverage=1 00:05:51.274 --rc genhtml_function_coverage=1 00:05:51.274 --rc genhtml_legend=1 00:05:51.274 --rc geninfo_all_blocks=1 00:05:51.274 --rc geninfo_unexecuted_blocks=1 00:05:51.274 00:05:51.274 ' 00:05:51.274 15:21:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:51.274 OK 00:05:51.274 15:21:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:51.274 00:05:51.274 real 0m0.294s 00:05:51.274 user 0m0.163s 00:05:51.274 sys 0m0.147s 00:05:51.274 15:21:49 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.274 15:21:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:51.274 ************************************ 00:05:51.274 END TEST rpc_client 00:05:51.274 ************************************ 00:05:51.274 15:21:49 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:51.274 15:21:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.274 15:21:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.274 15:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.274 ************************************ 00:05:51.274 START TEST json_config 00:05:51.274 ************************************ 00:05:51.274 15:21:49 json_config -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:51.534 15:21:49 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.534 15:21:49 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.534 15:21:49 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.534 15:21:49 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.534 15:21:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.534 15:21:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.534 15:21:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.534 15:21:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.534 15:21:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.534 15:21:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.534 15:21:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.534 15:21:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.534 15:21:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.534 15:21:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.535 15:21:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.535 15:21:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:51.535 15:21:49 json_config -- scripts/common.sh@345 -- # : 1 00:05:51.535 15:21:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.535 15:21:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.535 15:21:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:51.535 15:21:49 json_config -- scripts/common.sh@353 -- # local d=1 00:05:51.535 15:21:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.535 15:21:49 json_config -- scripts/common.sh@355 -- # echo 1 00:05:51.535 15:21:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.535 15:21:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:51.535 15:21:49 json_config -- scripts/common.sh@353 -- # local d=2 00:05:51.535 15:21:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.535 15:21:49 json_config -- scripts/common.sh@355 -- # echo 2 00:05:51.535 15:21:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.535 15:21:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.535 15:21:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.535 15:21:49 json_config -- scripts/common.sh@368 -- # return 0 00:05:51.535 15:21:49 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.535 15:21:49 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.535 --rc genhtml_branch_coverage=1 00:05:51.535 --rc genhtml_function_coverage=1 00:05:51.535 --rc genhtml_legend=1 00:05:51.535 --rc geninfo_all_blocks=1 00:05:51.535 --rc geninfo_unexecuted_blocks=1 00:05:51.535 00:05:51.535 ' 00:05:51.535 15:21:49 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.535 --rc genhtml_branch_coverage=1 00:05:51.535 --rc genhtml_function_coverage=1 00:05:51.535 --rc genhtml_legend=1 00:05:51.535 --rc geninfo_all_blocks=1 00:05:51.535 --rc geninfo_unexecuted_blocks=1 00:05:51.535 00:05:51.535 ' 00:05:51.535 15:21:49 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.535 --rc genhtml_branch_coverage=1 00:05:51.535 --rc genhtml_function_coverage=1 00:05:51.535 --rc genhtml_legend=1 00:05:51.535 --rc geninfo_all_blocks=1 00:05:51.535 --rc geninfo_unexecuted_blocks=1 00:05:51.535 00:05:51.535 ' 00:05:51.535 15:21:49 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.535 --rc genhtml_branch_coverage=1 00:05:51.535 --rc genhtml_function_coverage=1 00:05:51.535 --rc genhtml_legend=1 00:05:51.535 --rc geninfo_all_blocks=1 00:05:51.535 --rc geninfo_unexecuted_blocks=1 00:05:51.535 00:05:51.535 ' 00:05:51.535 15:21:49 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:642ac8ad-f34e-486b-a948-772d46b362cb 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=642ac8ad-f34e-486b-a948-772d46b362cb 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.535 15:21:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.535 15:21:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.535 15:21:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.535 15:21:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.535 15:21:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.535 15:21:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.535 15:21:49 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.535 15:21:49 json_config -- paths/export.sh@5 -- # export PATH 00:05:51.535 15:21:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@51 -- # : 0 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.535 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.535 15:21:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.535 15:21:49 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:51.535 15:21:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:51.535 15:21:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:51.535 15:21:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:51.535 15:21:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:51.535 15:21:49 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:51.535 WARNING: No tests are enabled so not running JSON configuration tests 00:05:51.535 15:21:49 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:51.535 00:05:51.535 real 0m0.227s 00:05:51.535 user 0m0.141s 00:05:51.535 sys 0m0.089s 00:05:51.535 15:21:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.535 15:21:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.535 ************************************ 00:05:51.535 END TEST json_config 00:05:51.535 ************************************ 00:05:51.535 15:21:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.535 15:21:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.535 15:21:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.535 15:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.535 ************************************ 00:05:51.535 START TEST json_config_extra_key 00:05:51.535 ************************************ 00:05:51.535 15:21:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.796 15:21:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.796 15:21:50 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:51.796 15:21:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.796 15:21:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.796 15:21:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:51.796 15:21:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.797 --rc genhtml_branch_coverage=1 00:05:51.797 --rc genhtml_function_coverage=1 00:05:51.797 --rc genhtml_legend=1 00:05:51.797 --rc geninfo_all_blocks=1 00:05:51.797 --rc geninfo_unexecuted_blocks=1 00:05:51.797 00:05:51.797 ' 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.797 --rc genhtml_branch_coverage=1 00:05:51.797 --rc genhtml_function_coverage=1 00:05:51.797 --rc 
genhtml_legend=1 00:05:51.797 --rc geninfo_all_blocks=1 00:05:51.797 --rc geninfo_unexecuted_blocks=1 00:05:51.797 00:05:51.797 ' 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.797 --rc genhtml_branch_coverage=1 00:05:51.797 --rc genhtml_function_coverage=1 00:05:51.797 --rc genhtml_legend=1 00:05:51.797 --rc geninfo_all_blocks=1 00:05:51.797 --rc geninfo_unexecuted_blocks=1 00:05:51.797 00:05:51.797 ' 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.797 --rc genhtml_branch_coverage=1 00:05:51.797 --rc genhtml_function_coverage=1 00:05:51.797 --rc genhtml_legend=1 00:05:51.797 --rc geninfo_all_blocks=1 00:05:51.797 --rc geninfo_unexecuted_blocks=1 00:05:51.797 00:05:51.797 ' 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:642ac8ad-f34e-486b-a948-772d46b362cb 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=642ac8ad-f34e-486b-a948-772d46b362cb 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.797 15:21:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.797 15:21:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.797 15:21:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.797 15:21:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.797 15:21:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.797 15:21:50 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.797 15:21:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.797 15:21:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:51.797 15:21:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.797 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.797 15:21:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:51.797 INFO: launching applications... 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:51.797 15:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71352 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.797 Waiting for target to run... 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71352 /var/tmp/spdk_tgt.sock 00:05:51.797 15:21:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71352 ']' 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:51.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.797 15:21:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:52.057 [2024-11-26 15:21:50.283215] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:52.057 [2024-11-26 15:21:50.283733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71352 ] 00:05:52.316 [2024-11-26 15:21:50.642619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:52.316 [2024-11-26 15:21:50.681436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.316 [2024-11-26 15:21:50.698695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.886 00:05:52.886 INFO: shutting down applications... 00:05:52.886 15:21:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.886 15:21:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:52.886 15:21:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:52.886 15:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:52.886 15:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:52.886 15:21:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:52.886 15:21:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:52.886 15:21:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71352 ]] 00:05:52.886 15:21:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71352 00:05:52.886 15:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:52.886 15:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.886 15:21:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71352 00:05:52.886 15:21:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.180 15:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.181 15:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.181 15:21:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71352 00:05:53.181 15:21:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:53.181 15:21:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:53.181 15:21:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:53.181 15:21:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:53.181 SPDK target shutdown done 00:05:53.181 15:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:53.181 Success 00:05:53.181 00:05:53.181 real 0m1.648s 00:05:53.181 user 0m1.313s 00:05:53.181 sys 0m0.503s 00:05:53.181 15:21:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.181 15:21:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:53.181 ************************************ 
00:05:53.181 END TEST json_config_extra_key 00:05:53.181 ************************************ 00:05:53.457 15:21:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.457 15:21:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.457 15:21:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.457 15:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:53.457 ************************************ 00:05:53.457 START TEST alias_rpc 00:05:53.457 ************************************ 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.457 * Looking for test storage... 00:05:53.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.457 15:21:51 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.457 15:21:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.457 --rc genhtml_branch_coverage=1 00:05:53.457 --rc genhtml_function_coverage=1 00:05:53.457 --rc genhtml_legend=1 00:05:53.457 --rc geninfo_all_blocks=1 00:05:53.457 --rc geninfo_unexecuted_blocks=1 00:05:53.457 00:05:53.457 ' 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.457 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.457 --rc genhtml_branch_coverage=1 00:05:53.457 --rc genhtml_function_coverage=1 00:05:53.457 --rc genhtml_legend=1 00:05:53.457 --rc geninfo_all_blocks=1 00:05:53.457 --rc geninfo_unexecuted_blocks=1 00:05:53.457 00:05:53.457 ' 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.457 --rc genhtml_branch_coverage=1 00:05:53.457 --rc genhtml_function_coverage=1 00:05:53.457 --rc genhtml_legend=1 00:05:53.457 --rc geninfo_all_blocks=1 00:05:53.457 --rc geninfo_unexecuted_blocks=1 00:05:53.457 00:05:53.457 ' 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.457 --rc genhtml_branch_coverage=1 00:05:53.457 --rc genhtml_function_coverage=1 00:05:53.457 --rc genhtml_legend=1 00:05:53.457 --rc geninfo_all_blocks=1 00:05:53.457 --rc geninfo_unexecuted_blocks=1 00:05:53.457 00:05:53.457 ' 00:05:53.457 15:21:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.457 15:21:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71431 00:05:53.457 15:21:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.457 15:21:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71431 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71431 ']' 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:53.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.457 15:21:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.716 [2024-11-26 15:21:51.982881] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:53.716 [2024-11-26 15:21:51.983131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71431 ] 00:05:53.716 [2024-11-26 15:21:52.117894] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.716 [2024-11-26 15:21:52.156228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.716 [2024-11-26 15:21:52.181576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.653 15:21:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.653 15:21:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.653 15:21:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:54.653 15:21:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71431 00:05:54.653 15:21:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71431 ']' 00:05:54.653 15:21:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71431 00:05:54.653 15:21:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:54.653 15:21:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.653 15:21:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71431 00:05:54.653 killing process with pid 71431 00:05:54.653 15:21:53 alias_rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.653 15:21:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.653 15:21:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71431' 00:05:54.653 15:21:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 71431 00:05:54.653 15:21:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 71431 00:05:55.223 00:05:55.223 real 0m1.735s 00:05:55.223 user 0m1.747s 00:05:55.223 sys 0m0.492s 00:05:55.223 ************************************ 00:05:55.223 END TEST alias_rpc 00:05:55.223 ************************************ 00:05:55.223 15:21:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.223 15:21:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.223 15:21:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:55.223 15:21:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:55.223 15:21:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.223 15:21:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.223 15:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.223 ************************************ 00:05:55.223 START TEST spdkcli_tcp 00:05:55.223 ************************************ 00:05:55.223 15:21:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:55.223 * Looking for test storage... 
00:05:55.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:55.224 15:21:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.224 15:21:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.224 15:21:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.224 15:21:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.224 15:21:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:55.484 15:21:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.484 15:21:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.484 15:21:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.484 15:21:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.484 --rc genhtml_branch_coverage=1 00:05:55.484 --rc genhtml_function_coverage=1 00:05:55.484 --rc genhtml_legend=1 00:05:55.484 --rc geninfo_all_blocks=1 00:05:55.484 --rc geninfo_unexecuted_blocks=1 00:05:55.484 00:05:55.484 ' 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.484 --rc genhtml_branch_coverage=1 00:05:55.484 --rc genhtml_function_coverage=1 00:05:55.484 --rc genhtml_legend=1 00:05:55.484 --rc geninfo_all_blocks=1 00:05:55.484 --rc geninfo_unexecuted_blocks=1 00:05:55.484 00:05:55.484 ' 00:05:55.484 15:21:53 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.484 --rc genhtml_branch_coverage=1 00:05:55.484 --rc genhtml_function_coverage=1 00:05:55.484 --rc genhtml_legend=1 00:05:55.484 --rc geninfo_all_blocks=1 00:05:55.484 --rc geninfo_unexecuted_blocks=1 00:05:55.484 00:05:55.484 ' 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.484 --rc genhtml_branch_coverage=1 00:05:55.484 --rc genhtml_function_coverage=1 00:05:55.484 --rc genhtml_legend=1 00:05:55.484 --rc geninfo_all_blocks=1 00:05:55.484 --rc geninfo_unexecuted_blocks=1 00:05:55.484 00:05:55.484 ' 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71505 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:55.484 15:21:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71505 00:05:55.484 15:21:53 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 71505 ']' 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.484 15:21:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.485 [2024-11-26 15:21:53.803107] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:55.485 [2024-11-26 15:21:53.803684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71505 ] 00:05:55.485 [2024-11-26 15:21:53.938470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:55.745 [2024-11-26 15:21:53.978948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.745 [2024-11-26 15:21:54.006328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.745 [2024-11-26 15:21:54.006433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.316 15:21:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.316 15:21:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:56.316 15:21:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71522 00:05:56.316 15:21:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:56.316 15:21:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:56.576 [ 00:05:56.576 "bdev_malloc_delete", 00:05:56.576 "bdev_malloc_create", 00:05:56.576 "bdev_null_resize", 00:05:56.576 "bdev_null_delete", 00:05:56.576 "bdev_null_create", 00:05:56.576 "bdev_nvme_cuse_unregister", 00:05:56.576 "bdev_nvme_cuse_register", 00:05:56.576 "bdev_opal_new_user", 00:05:56.576 "bdev_opal_set_lock_state", 00:05:56.576 "bdev_opal_delete", 00:05:56.576 "bdev_opal_get_info", 00:05:56.576 "bdev_opal_create", 00:05:56.576 "bdev_nvme_opal_revert", 00:05:56.576 "bdev_nvme_opal_init", 00:05:56.576 "bdev_nvme_send_cmd", 00:05:56.576 "bdev_nvme_set_keys", 00:05:56.576 "bdev_nvme_get_path_iostat", 00:05:56.576 "bdev_nvme_get_mdns_discovery_info", 00:05:56.576 "bdev_nvme_stop_mdns_discovery", 00:05:56.576 "bdev_nvme_start_mdns_discovery", 00:05:56.576 "bdev_nvme_set_multipath_policy", 00:05:56.576 "bdev_nvme_set_preferred_path", 00:05:56.576 "bdev_nvme_get_io_paths", 00:05:56.576 "bdev_nvme_remove_error_injection", 00:05:56.576 "bdev_nvme_add_error_injection", 00:05:56.576 "bdev_nvme_get_discovery_info", 00:05:56.576 "bdev_nvme_stop_discovery", 00:05:56.576 "bdev_nvme_start_discovery", 00:05:56.576 
"bdev_nvme_get_controller_health_info", 00:05:56.576 "bdev_nvme_disable_controller", 00:05:56.576 "bdev_nvme_enable_controller", 00:05:56.576 "bdev_nvme_reset_controller", 00:05:56.576 "bdev_nvme_get_transport_statistics", 00:05:56.576 "bdev_nvme_apply_firmware", 00:05:56.576 "bdev_nvme_detach_controller", 00:05:56.576 "bdev_nvme_get_controllers", 00:05:56.576 "bdev_nvme_attach_controller", 00:05:56.576 "bdev_nvme_set_hotplug", 00:05:56.576 "bdev_nvme_set_options", 00:05:56.576 "bdev_passthru_delete", 00:05:56.576 "bdev_passthru_create", 00:05:56.576 "bdev_lvol_set_parent_bdev", 00:05:56.576 "bdev_lvol_set_parent", 00:05:56.576 "bdev_lvol_check_shallow_copy", 00:05:56.576 "bdev_lvol_start_shallow_copy", 00:05:56.576 "bdev_lvol_grow_lvstore", 00:05:56.576 "bdev_lvol_get_lvols", 00:05:56.576 "bdev_lvol_get_lvstores", 00:05:56.576 "bdev_lvol_delete", 00:05:56.576 "bdev_lvol_set_read_only", 00:05:56.576 "bdev_lvol_resize", 00:05:56.576 "bdev_lvol_decouple_parent", 00:05:56.576 "bdev_lvol_inflate", 00:05:56.576 "bdev_lvol_rename", 00:05:56.576 "bdev_lvol_clone_bdev", 00:05:56.576 "bdev_lvol_clone", 00:05:56.576 "bdev_lvol_snapshot", 00:05:56.576 "bdev_lvol_create", 00:05:56.576 "bdev_lvol_delete_lvstore", 00:05:56.576 "bdev_lvol_rename_lvstore", 00:05:56.576 "bdev_lvol_create_lvstore", 00:05:56.576 "bdev_raid_set_options", 00:05:56.576 "bdev_raid_remove_base_bdev", 00:05:56.576 "bdev_raid_add_base_bdev", 00:05:56.576 "bdev_raid_delete", 00:05:56.576 "bdev_raid_create", 00:05:56.576 "bdev_raid_get_bdevs", 00:05:56.576 "bdev_error_inject_error", 00:05:56.576 "bdev_error_delete", 00:05:56.576 "bdev_error_create", 00:05:56.576 "bdev_split_delete", 00:05:56.576 "bdev_split_create", 00:05:56.576 "bdev_delay_delete", 00:05:56.576 "bdev_delay_create", 00:05:56.576 "bdev_delay_update_latency", 00:05:56.576 "bdev_zone_block_delete", 00:05:56.576 "bdev_zone_block_create", 00:05:56.576 "blobfs_create", 00:05:56.576 "blobfs_detect", 00:05:56.576 "blobfs_set_cache_size", 00:05:56.576 
"bdev_aio_delete", 00:05:56.576 "bdev_aio_rescan", 00:05:56.576 "bdev_aio_create", 00:05:56.576 "bdev_ftl_set_property", 00:05:56.576 "bdev_ftl_get_properties", 00:05:56.576 "bdev_ftl_get_stats", 00:05:56.576 "bdev_ftl_unmap", 00:05:56.576 "bdev_ftl_unload", 00:05:56.576 "bdev_ftl_delete", 00:05:56.576 "bdev_ftl_load", 00:05:56.576 "bdev_ftl_create", 00:05:56.576 "bdev_virtio_attach_controller", 00:05:56.576 "bdev_virtio_scsi_get_devices", 00:05:56.576 "bdev_virtio_detach_controller", 00:05:56.576 "bdev_virtio_blk_set_hotplug", 00:05:56.576 "bdev_iscsi_delete", 00:05:56.576 "bdev_iscsi_create", 00:05:56.576 "bdev_iscsi_set_options", 00:05:56.576 "accel_error_inject_error", 00:05:56.576 "ioat_scan_accel_module", 00:05:56.576 "dsa_scan_accel_module", 00:05:56.576 "iaa_scan_accel_module", 00:05:56.576 "keyring_file_remove_key", 00:05:56.576 "keyring_file_add_key", 00:05:56.576 "keyring_linux_set_options", 00:05:56.576 "fsdev_aio_delete", 00:05:56.576 "fsdev_aio_create", 00:05:56.576 "iscsi_get_histogram", 00:05:56.576 "iscsi_enable_histogram", 00:05:56.576 "iscsi_set_options", 00:05:56.576 "iscsi_get_auth_groups", 00:05:56.576 "iscsi_auth_group_remove_secret", 00:05:56.576 "iscsi_auth_group_add_secret", 00:05:56.576 "iscsi_delete_auth_group", 00:05:56.576 "iscsi_create_auth_group", 00:05:56.576 "iscsi_set_discovery_auth", 00:05:56.576 "iscsi_get_options", 00:05:56.576 "iscsi_target_node_request_logout", 00:05:56.576 "iscsi_target_node_set_redirect", 00:05:56.576 "iscsi_target_node_set_auth", 00:05:56.576 "iscsi_target_node_add_lun", 00:05:56.576 "iscsi_get_stats", 00:05:56.576 "iscsi_get_connections", 00:05:56.576 "iscsi_portal_group_set_auth", 00:05:56.576 "iscsi_start_portal_group", 00:05:56.576 "iscsi_delete_portal_group", 00:05:56.576 "iscsi_create_portal_group", 00:05:56.576 "iscsi_get_portal_groups", 00:05:56.576 "iscsi_delete_target_node", 00:05:56.576 "iscsi_target_node_remove_pg_ig_maps", 00:05:56.576 "iscsi_target_node_add_pg_ig_maps", 00:05:56.576 
"iscsi_create_target_node", 00:05:56.576 "iscsi_get_target_nodes", 00:05:56.576 "iscsi_delete_initiator_group", 00:05:56.576 "iscsi_initiator_group_remove_initiators", 00:05:56.576 "iscsi_initiator_group_add_initiators", 00:05:56.577 "iscsi_create_initiator_group", 00:05:56.577 "iscsi_get_initiator_groups", 00:05:56.577 "nvmf_set_crdt", 00:05:56.577 "nvmf_set_config", 00:05:56.577 "nvmf_set_max_subsystems", 00:05:56.577 "nvmf_stop_mdns_prr", 00:05:56.577 "nvmf_publish_mdns_prr", 00:05:56.577 "nvmf_subsystem_get_listeners", 00:05:56.577 "nvmf_subsystem_get_qpairs", 00:05:56.577 "nvmf_subsystem_get_controllers", 00:05:56.577 "nvmf_get_stats", 00:05:56.577 "nvmf_get_transports", 00:05:56.577 "nvmf_create_transport", 00:05:56.577 "nvmf_get_targets", 00:05:56.577 "nvmf_delete_target", 00:05:56.577 "nvmf_create_target", 00:05:56.577 "nvmf_subsystem_allow_any_host", 00:05:56.577 "nvmf_subsystem_set_keys", 00:05:56.577 "nvmf_subsystem_remove_host", 00:05:56.577 "nvmf_subsystem_add_host", 00:05:56.577 "nvmf_ns_remove_host", 00:05:56.577 "nvmf_ns_add_host", 00:05:56.577 "nvmf_subsystem_remove_ns", 00:05:56.577 "nvmf_subsystem_set_ns_ana_group", 00:05:56.577 "nvmf_subsystem_add_ns", 00:05:56.577 "nvmf_subsystem_listener_set_ana_state", 00:05:56.577 "nvmf_discovery_get_referrals", 00:05:56.577 "nvmf_discovery_remove_referral", 00:05:56.577 "nvmf_discovery_add_referral", 00:05:56.577 "nvmf_subsystem_remove_listener", 00:05:56.577 "nvmf_subsystem_add_listener", 00:05:56.577 "nvmf_delete_subsystem", 00:05:56.577 "nvmf_create_subsystem", 00:05:56.577 "nvmf_get_subsystems", 00:05:56.577 "env_dpdk_get_mem_stats", 00:05:56.577 "nbd_get_disks", 00:05:56.577 "nbd_stop_disk", 00:05:56.577 "nbd_start_disk", 00:05:56.577 "ublk_recover_disk", 00:05:56.577 "ublk_get_disks", 00:05:56.577 "ublk_stop_disk", 00:05:56.577 "ublk_start_disk", 00:05:56.577 "ublk_destroy_target", 00:05:56.577 "ublk_create_target", 00:05:56.577 "virtio_blk_create_transport", 00:05:56.577 "virtio_blk_get_transports", 
00:05:56.577 "vhost_controller_set_coalescing", 00:05:56.577 "vhost_get_controllers", 00:05:56.577 "vhost_delete_controller", 00:05:56.577 "vhost_create_blk_controller", 00:05:56.577 "vhost_scsi_controller_remove_target", 00:05:56.577 "vhost_scsi_controller_add_target", 00:05:56.577 "vhost_start_scsi_controller", 00:05:56.577 "vhost_create_scsi_controller", 00:05:56.577 "thread_set_cpumask", 00:05:56.577 "scheduler_set_options", 00:05:56.577 "framework_get_governor", 00:05:56.577 "framework_get_scheduler", 00:05:56.577 "framework_set_scheduler", 00:05:56.577 "framework_get_reactors", 00:05:56.577 "thread_get_io_channels", 00:05:56.577 "thread_get_pollers", 00:05:56.577 "thread_get_stats", 00:05:56.577 "framework_monitor_context_switch", 00:05:56.577 "spdk_kill_instance", 00:05:56.577 "log_enable_timestamps", 00:05:56.577 "log_get_flags", 00:05:56.577 "log_clear_flag", 00:05:56.577 "log_set_flag", 00:05:56.577 "log_get_level", 00:05:56.577 "log_set_level", 00:05:56.577 "log_get_print_level", 00:05:56.577 "log_set_print_level", 00:05:56.577 "framework_enable_cpumask_locks", 00:05:56.577 "framework_disable_cpumask_locks", 00:05:56.577 "framework_wait_init", 00:05:56.577 "framework_start_init", 00:05:56.577 "scsi_get_devices", 00:05:56.577 "bdev_get_histogram", 00:05:56.577 "bdev_enable_histogram", 00:05:56.577 "bdev_set_qos_limit", 00:05:56.577 "bdev_set_qd_sampling_period", 00:05:56.577 "bdev_get_bdevs", 00:05:56.577 "bdev_reset_iostat", 00:05:56.577 "bdev_get_iostat", 00:05:56.577 "bdev_examine", 00:05:56.577 "bdev_wait_for_examine", 00:05:56.577 "bdev_set_options", 00:05:56.577 "accel_get_stats", 00:05:56.577 "accel_set_options", 00:05:56.577 "accel_set_driver", 00:05:56.577 "accel_crypto_key_destroy", 00:05:56.577 "accel_crypto_keys_get", 00:05:56.577 "accel_crypto_key_create", 00:05:56.577 "accel_assign_opc", 00:05:56.577 "accel_get_module_info", 00:05:56.577 "accel_get_opc_assignments", 00:05:56.577 "vmd_rescan", 00:05:56.577 "vmd_remove_device", 00:05:56.577 
"vmd_enable", 00:05:56.577 "sock_get_default_impl", 00:05:56.577 "sock_set_default_impl", 00:05:56.577 "sock_impl_set_options", 00:05:56.577 "sock_impl_get_options", 00:05:56.577 "iobuf_get_stats", 00:05:56.577 "iobuf_set_options", 00:05:56.577 "keyring_get_keys", 00:05:56.577 "framework_get_pci_devices", 00:05:56.577 "framework_get_config", 00:05:56.577 "framework_get_subsystems", 00:05:56.577 "fsdev_set_opts", 00:05:56.577 "fsdev_get_opts", 00:05:56.577 "trace_get_info", 00:05:56.577 "trace_get_tpoint_group_mask", 00:05:56.577 "trace_disable_tpoint_group", 00:05:56.577 "trace_enable_tpoint_group", 00:05:56.577 "trace_clear_tpoint_mask", 00:05:56.577 "trace_set_tpoint_mask", 00:05:56.577 "notify_get_notifications", 00:05:56.577 "notify_get_types", 00:05:56.577 "spdk_get_version", 00:05:56.577 "rpc_get_methods" 00:05:56.577 ] 00:05:56.577 15:21:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.577 15:21:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:56.577 15:21:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71505 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71505 ']' 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71505 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71505 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 71505' 00:05:56.577 killing process with pid 71505 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71505 00:05:56.577 15:21:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71505 00:05:56.837 00:05:56.837 real 0m1.796s 00:05:56.837 user 0m2.984s 00:05:56.837 sys 0m0.572s 00:05:56.837 15:21:55 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.837 15:21:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.837 ************************************ 00:05:56.837 END TEST spdkcli_tcp 00:05:56.837 ************************************ 00:05:57.098 15:21:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.098 15:21:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.098 15:21:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.098 15:21:55 -- common/autotest_common.sh@10 -- # set +x 00:05:57.098 ************************************ 00:05:57.098 START TEST dpdk_mem_utility 00:05:57.098 ************************************ 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:57.098 * Looking for test storage... 
00:05:57.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.098 15:21:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.098 --rc genhtml_branch_coverage=1 00:05:57.098 --rc genhtml_function_coverage=1 00:05:57.098 --rc genhtml_legend=1 00:05:57.098 --rc geninfo_all_blocks=1 00:05:57.098 --rc geninfo_unexecuted_blocks=1 00:05:57.098 00:05:57.098 ' 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.098 --rc genhtml_branch_coverage=1 00:05:57.098 --rc genhtml_function_coverage=1 00:05:57.098 --rc genhtml_legend=1 00:05:57.098 --rc geninfo_all_blocks=1 00:05:57.098 --rc 
geninfo_unexecuted_blocks=1 00:05:57.098 00:05:57.098 ' 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.098 --rc genhtml_branch_coverage=1 00:05:57.098 --rc genhtml_function_coverage=1 00:05:57.098 --rc genhtml_legend=1 00:05:57.098 --rc geninfo_all_blocks=1 00:05:57.098 --rc geninfo_unexecuted_blocks=1 00:05:57.098 00:05:57.098 ' 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.098 --rc genhtml_branch_coverage=1 00:05:57.098 --rc genhtml_function_coverage=1 00:05:57.098 --rc genhtml_legend=1 00:05:57.098 --rc geninfo_all_blocks=1 00:05:57.098 --rc geninfo_unexecuted_blocks=1 00:05:57.098 00:05:57.098 ' 00:05:57.098 15:21:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:57.098 15:21:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71605 00:05:57.098 15:21:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.098 15:21:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71605 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71605 ']' 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.098 15:21:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:57.358 [2024-11-26 15:21:55.658531] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:57.358 [2024-11-26 15:21:55.658649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71605 ] 00:05:57.359 [2024-11-26 15:21:55.792989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:57.619 [2024-11-26 15:21:55.833058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.619 [2024-11-26 15:21:55.858197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.192 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.192 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:58.192 15:21:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:58.192 15:21:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:58.192 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.192 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.192 { 00:05:58.192 "filename": "/tmp/spdk_mem_dump.txt" 00:05:58.192 } 00:05:58.192 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.192 15:21:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:58.192 DPDK memory size 810.000000 MiB in 1 heap(s) 
00:05:58.192 1 heaps totaling size 810.000000 MiB 00:05:58.192 size: 810.000000 MiB heap id: 0 00:05:58.192 end heaps---------- 00:05:58.192 9 mempools totaling size 595.772034 MiB 00:05:58.192 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:58.192 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:58.192 size: 92.545471 MiB name: bdev_io_71605 00:05:58.192 size: 50.003479 MiB name: msgpool_71605 00:05:58.192 size: 36.509338 MiB name: fsdev_io_71605 00:05:58.192 size: 21.763794 MiB name: PDU_Pool 00:05:58.192 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:58.192 size: 4.133484 MiB name: evtpool_71605 00:05:58.192 size: 0.026123 MiB name: Session_Pool 00:05:58.192 end mempools------- 00:05:58.192 6 memzones totaling size 4.142822 MiB 00:05:58.192 size: 1.000366 MiB name: RG_ring_0_71605 00:05:58.192 size: 1.000366 MiB name: RG_ring_1_71605 00:05:58.192 size: 1.000366 MiB name: RG_ring_4_71605 00:05:58.192 size: 1.000366 MiB name: RG_ring_5_71605 00:05:58.192 size: 0.125366 MiB name: RG_ring_2_71605 00:05:58.192 size: 0.015991 MiB name: RG_ring_3_71605 00:05:58.192 end memzones------- 00:05:58.192 15:21:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:58.192 heap id: 0 total size: 810.000000 MiB number of busy elements: 309 number of free elements: 15 00:05:58.192 list of free elements. 
size: 10.954529 MiB 00:05:58.192 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:58.192 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:58.192 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:58.192 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:58.192 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:58.192 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:58.192 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:58.192 element at address: 0x200000200000 with size: 0.858093 MiB 00:05:58.192 element at address: 0x20001a600000 with size: 0.568237 MiB 00:05:58.192 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:58.192 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:58.192 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:58.192 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:58.192 element at address: 0x200027a00000 with size: 0.395752 MiB 00:05:58.192 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:58.192 list of standard malloc elements. 
size: 199.126587 MiB 00:05:58.192 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:58.192 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:58.192 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:58.192 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:58.192 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:58.192 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:58.192 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:58.192 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:58.192 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:58.192 element at 
address: 0x2000004ff400 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087f080 with size: 0.000183 MiB 
00:05:58.192 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:58.192 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d780 with 
size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:58.192 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:58.193 element at address: 
0x200000c7ec80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:58.193 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:58.193 
element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692740 with size: 0.000183 
MiB 00:05:58.193 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693c40 
with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:58.193 element at 
address: 0x20001a695140 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:58.193 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a65500 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:58.193 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d200 with size: 0.000183 MiB 
00:05:58.194 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e700 with 
size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:58.194 element at address: 
0x200027a6fc00 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:58.194 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:58.194 list of memzone associated elements. size: 599.918884 MiB 00:05:58.194 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:58.194 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:58.194 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:58.194 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:58.194 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:58.194 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71605_0 00:05:58.194 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:58.194 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71605_0 00:05:58.194 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:58.194 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71605_0 00:05:58.194 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:58.194 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:58.194 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:58.194 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:58.194 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:58.194 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71605_0 00:05:58.194 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:58.194 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71605 00:05:58.194 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:05:58.194 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71605 00:05:58.194 element at address: 
0x20000a6fde40 with size: 1.008118 MiB 00:05:58.194 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:58.194 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:58.194 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:58.194 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:58.194 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:58.194 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:58.194 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:58.194 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:58.194 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71605 00:05:58.194 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:58.194 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71605 00:05:58.194 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:58.194 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71605 00:05:58.194 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:58.194 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71605 00:05:58.194 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:58.194 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71605 00:05:58.194 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:58.194 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71605 00:05:58.194 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:58.194 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:58.194 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:58.194 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:58.194 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:58.194 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:58.194 element at 
address: 0x2000002dbac0 with size: 0.125488 MiB 00:05:58.194 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71605 00:05:58.194 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:58.194 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71605 00:05:58.194 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:58.194 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:58.194 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:58.194 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:58.194 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:58.194 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71605 00:05:58.194 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:58.194 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:58.194 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:58.194 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71605 00:05:58.194 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:58.194 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71605 00:05:58.194 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:58.194 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71605 00:05:58.194 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:58.194 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:58.194 15:21:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:58.194 15:21:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71605 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71605 ']' 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71605 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # 
uname 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71605 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71605' 00:05:58.194 killing process with pid 71605 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71605 00:05:58.194 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71605 00:05:58.765 00:05:58.765 real 0m1.647s 00:05:58.765 user 0m1.588s 00:05:58.765 sys 0m0.494s 00:05:58.765 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.765 15:21:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.765 ************************************ 00:05:58.765 END TEST dpdk_mem_utility 00:05:58.765 ************************************ 00:05:58.765 15:21:57 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.765 15:21:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.765 15:21:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.765 15:21:57 -- common/autotest_common.sh@10 -- # set +x 00:05:58.765 ************************************ 00:05:58.765 START TEST event 00:05:58.765 ************************************ 00:05:58.765 15:21:57 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.765 * Looking for test storage... 
00:05:58.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:58.765 15:21:57 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.765 15:21:57 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.765 15:21:57 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.025 15:21:57 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.025 15:21:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.025 15:21:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.025 15:21:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.025 15:21:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.025 15:21:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.025 15:21:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.025 15:21:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.025 15:21:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.025 15:21:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.025 15:21:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.025 15:21:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.025 15:21:57 event -- scripts/common.sh@344 -- # case "$op" in 00:05:59.025 15:21:57 event -- scripts/common.sh@345 -- # : 1 00:05:59.025 15:21:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.025 15:21:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.025 15:21:57 event -- scripts/common.sh@365 -- # decimal 1 00:05:59.025 15:21:57 event -- scripts/common.sh@353 -- # local d=1 00:05:59.025 15:21:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.025 15:21:57 event -- scripts/common.sh@355 -- # echo 1 00:05:59.025 15:21:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.025 15:21:57 event -- scripts/common.sh@366 -- # decimal 2 00:05:59.025 15:21:57 event -- scripts/common.sh@353 -- # local d=2 00:05:59.025 15:21:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.025 15:21:57 event -- scripts/common.sh@355 -- # echo 2 00:05:59.025 15:21:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.025 15:21:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.025 15:21:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.025 15:21:57 event -- scripts/common.sh@368 -- # return 0 00:05:59.025 15:21:57 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.025 15:21:57 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.025 --rc genhtml_branch_coverage=1 00:05:59.025 --rc genhtml_function_coverage=1 00:05:59.025 --rc genhtml_legend=1 00:05:59.025 --rc geninfo_all_blocks=1 00:05:59.025 --rc geninfo_unexecuted_blocks=1 00:05:59.025 00:05:59.025 ' 00:05:59.025 15:21:57 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.025 --rc genhtml_branch_coverage=1 00:05:59.025 --rc genhtml_function_coverage=1 00:05:59.025 --rc genhtml_legend=1 00:05:59.025 --rc geninfo_all_blocks=1 00:05:59.025 --rc geninfo_unexecuted_blocks=1 00:05:59.025 00:05:59.025 ' 00:05:59.025 15:21:57 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.025 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:59.025 --rc genhtml_branch_coverage=1 00:05:59.025 --rc genhtml_function_coverage=1 00:05:59.025 --rc genhtml_legend=1 00:05:59.025 --rc geninfo_all_blocks=1 00:05:59.025 --rc geninfo_unexecuted_blocks=1 00:05:59.025 00:05:59.025 ' 00:05:59.025 15:21:57 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.025 --rc genhtml_branch_coverage=1 00:05:59.025 --rc genhtml_function_coverage=1 00:05:59.025 --rc genhtml_legend=1 00:05:59.025 --rc geninfo_all_blocks=1 00:05:59.025 --rc geninfo_unexecuted_blocks=1 00:05:59.025 00:05:59.025 ' 00:05:59.025 15:21:57 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:59.025 15:21:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:59.025 15:21:57 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:59.025 15:21:57 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:59.025 15:21:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.025 15:21:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.025 ************************************ 00:05:59.025 START TEST event_perf 00:05:59.025 ************************************ 00:05:59.025 15:21:57 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:59.025 Running I/O for 1 seconds...[2024-11-26 15:21:57.333266] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:05:59.025 [2024-11-26 15:21:57.333428] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71691 ] 00:05:59.025 [2024-11-26 15:21:57.466530] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:59.284 [2024-11-26 15:21:57.504760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.284 Running I/O for 1 seconds...[2024-11-26 15:21:57.533371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.284 [2024-11-26 15:21:57.533597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.284 [2024-11-26 15:21:57.534084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.284 [2024-11-26 15:21:57.534265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.223 00:06:00.223 lcore 0: 210433 00:06:00.223 lcore 1: 210434 00:06:00.223 lcore 2: 210433 00:06:00.223 lcore 3: 210435 00:06:00.223 done. 
00:06:00.223 ************************************ 00:06:00.223 END TEST event_perf 00:06:00.223 ************************************ 00:06:00.223 00:06:00.223 real 0m1.319s 00:06:00.223 user 0m4.086s 00:06:00.223 sys 0m0.121s 00:06:00.223 15:21:58 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.223 15:21:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.223 15:21:58 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:00.223 15:21:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:00.223 15:21:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.223 15:21:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.223 ************************************ 00:06:00.223 START TEST event_reactor 00:06:00.223 ************************************ 00:06:00.223 15:21:58 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:00.481 [2024-11-26 15:21:58.721846] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:00.481 [2024-11-26 15:21:58.722042] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71725 ] 00:06:00.481 [2024-11-26 15:21:58.853410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:00.481 [2024-11-26 15:21:58.889895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.481 [2024-11-26 15:21:58.915489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.861 test_start 00:06:01.861 oneshot 00:06:01.861 tick 100 00:06:01.861 tick 100 00:06:01.861 tick 250 00:06:01.861 tick 100 00:06:01.861 tick 100 00:06:01.861 tick 100 00:06:01.861 tick 250 00:06:01.861 tick 500 00:06:01.861 tick 100 00:06:01.861 tick 100 00:06:01.861 tick 250 00:06:01.861 tick 100 00:06:01.861 tick 100 00:06:01.861 test_end 00:06:01.861 00:06:01.861 real 0m1.308s 00:06:01.861 user 0m1.107s 00:06:01.861 sys 0m0.094s 00:06:01.861 15:21:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.861 15:21:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:01.861 ************************************ 00:06:01.861 END TEST event_reactor 00:06:01.861 ************************************ 00:06:01.861 15:22:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.861 15:22:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:01.861 15:22:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.861 15:22:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.861 ************************************ 00:06:01.861 START TEST event_reactor_perf 00:06:01.861 ************************************ 00:06:01.861 15:22:00 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.861 [2024-11-26 15:22:00.089551] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:06:01.861 [2024-11-26 15:22:00.089759] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71759 ] 00:06:01.861 [2024-11-26 15:22:00.221531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:01.861 [2024-11-26 15:22:00.256997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.861 [2024-11-26 15:22:00.283069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.241 test_start 00:06:03.241 test_end 00:06:03.241 Performance: 397260 events per second 00:06:03.241 00:06:03.241 real 0m1.304s 00:06:03.241 user 0m1.102s 00:06:03.241 sys 0m0.095s 00:06:03.241 15:22:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.241 ************************************ 00:06:03.241 END TEST event_reactor_perf 00:06:03.241 ************************************ 00:06:03.241 15:22:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.241 15:22:01 event -- event/event.sh@49 -- # uname -s 00:06:03.241 15:22:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:03.241 15:22:01 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:03.241 15:22:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.241 15:22:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.241 15:22:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.241 ************************************ 00:06:03.241 START TEST event_scheduler 00:06:03.241 ************************************ 00:06:03.241 15:22:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:03.241 * Looking for test storage... 00:06:03.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:03.241 15:22:01 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.241 15:22:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.241 15:22:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.241 15:22:01 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:03.241 15:22:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.242 15:22:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.242 15:22:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.242 15:22:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.242 --rc genhtml_branch_coverage=1 00:06:03.242 --rc genhtml_function_coverage=1 00:06:03.242 --rc genhtml_legend=1 00:06:03.242 --rc geninfo_all_blocks=1 00:06:03.242 --rc geninfo_unexecuted_blocks=1 00:06:03.242 00:06:03.242 ' 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.242 --rc genhtml_branch_coverage=1 00:06:03.242 --rc genhtml_function_coverage=1 00:06:03.242 --rc 
genhtml_legend=1 00:06:03.242 --rc geninfo_all_blocks=1 00:06:03.242 --rc geninfo_unexecuted_blocks=1 00:06:03.242 00:06:03.242 ' 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.242 --rc genhtml_branch_coverage=1 00:06:03.242 --rc genhtml_function_coverage=1 00:06:03.242 --rc genhtml_legend=1 00:06:03.242 --rc geninfo_all_blocks=1 00:06:03.242 --rc geninfo_unexecuted_blocks=1 00:06:03.242 00:06:03.242 ' 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.242 --rc genhtml_branch_coverage=1 00:06:03.242 --rc genhtml_function_coverage=1 00:06:03.242 --rc genhtml_legend=1 00:06:03.242 --rc geninfo_all_blocks=1 00:06:03.242 --rc geninfo_unexecuted_blocks=1 00:06:03.242 00:06:03.242 ' 00:06:03.242 15:22:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:03.242 15:22:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71832 00:06:03.242 15:22:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:03.242 15:22:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.242 15:22:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71832 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 71832 ']' 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:03.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.242 15:22:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.502 [2024-11-26 15:22:01.733142] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:03.502 [2024-11-26 15:22:01.733357] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71832 ] 00:06:03.502 [2024-11-26 15:22:01.869139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.502 [2024-11-26 15:22:01.905012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.502 [2024-11-26 15:22:01.934660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.502 [2024-11-26 15:22:01.934800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.502 [2024-11-26 15:22:01.934886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.502 [2024-11-26 15:22:01.934970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:04.449 15:22:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 
GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:04.449 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:04.449 POWER: intel_pstate driver is not supported 00:06:04.449 POWER: cppc_cpufreq driver is not supported 00:06:04.449 POWER: amd-pstate driver is not supported 00:06:04.449 POWER: acpi-cpufreq driver is not supported 00:06:04.449 POWER: Unable to set Power Management Environment for lcore 0 00:06:04.449 [2024-11-26 15:22:02.560811] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:04.449 [2024-11-26 15:22:02.560876] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:04.449 [2024-11-26 15:22:02.560916] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:04.449 [2024-11-26 15:22:02.560982] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:04.449 [2024-11-26 15:22:02.561023] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:04.449 [2024-11-26 15:22:02.561060] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 [2024-11-26 15:22:02.637586] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 ************************************ 00:06:04.449 START TEST scheduler_create_thread 00:06:04.449 ************************************ 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 2 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 3 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 4 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 5 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 6 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.449 7 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 8 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.449 9 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.449 15:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.019 10 00:06:05.019 15:22:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.019 15:22:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:05.019 15:22:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.019 15:22:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.398 15:22:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.398 15:22:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:06.398 15:22:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:06.398 15:22:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.398 15:22:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.966 15:22:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.966 15:22:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:06.966 15:22:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.966 15:22:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.906 15:22:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.906 15:22:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:07.906 15:22:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:07.906 15:22:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.906 15:22:06 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.475 ************************************ 00:06:08.475 END TEST scheduler_create_thread 00:06:08.475 ************************************ 00:06:08.475 15:22:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.475 00:06:08.475 real 0m4.220s 00:06:08.475 user 0m0.028s 00:06:08.475 sys 0m0.009s 00:06:08.475 15:22:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.475 15:22:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.475 15:22:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.475 15:22:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71832 00:06:08.475 15:22:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 71832 ']' 00:06:08.475 15:22:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 71832 00:06:08.475 15:22:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:08.475 15:22:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.475 15:22:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71832 00:06:08.735 killing process with pid 71832 00:06:08.735 15:22:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:08.735 15:22:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:08.735 15:22:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71832' 00:06:08.735 15:22:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 71832 00:06:08.735 15:22:06 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 71832 00:06:08.735 [2024-11-26 15:22:07.150766] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:08.994 00:06:08.994 real 0m5.995s 00:06:08.994 user 0m12.888s 00:06:08.994 sys 0m0.492s 00:06:08.994 15:22:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.994 ************************************ 00:06:08.994 END TEST event_scheduler 00:06:08.994 ************************************ 00:06:08.994 15:22:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.253 15:22:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:09.253 15:22:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:09.253 15:22:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.253 15:22:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.253 15:22:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.253 ************************************ 00:06:09.253 START TEST app_repeat 00:06:09.253 ************************************ 00:06:09.253 15:22:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71944 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:09.253 
15:22:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.253 Process app_repeat pid: 71944 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71944' 00:06:09.253 spdk_app_start Round 0 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:09.253 15:22:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71944 /var/tmp/spdk-nbd.sock 00:06:09.253 15:22:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71944 ']' 00:06:09.253 15:22:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.253 15:22:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.253 15:22:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.253 15:22:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.254 15:22:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.254 [2024-11-26 15:22:07.553218] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:09.254 [2024-11-26 15:22:07.553403] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71944 ] 00:06:09.254 [2024-11-26 15:22:07.688235] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:09.254 [2024-11-26 15:22:07.726091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.513 [2024-11-26 15:22:07.751615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.513 [2024-11-26 15:22:07.751736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.083 15:22:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.083 15:22:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.083 15:22:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.343 Malloc0 00:06:10.343 15:22:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.343 Malloc1 00:06:10.603 15:22:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.603 15:22:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.603 /dev/nbd0 00:06:10.603 15:22:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.603 15:22:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.603 1+0 records in 00:06:10.603 1+0 records out 00:06:10.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483443 s, 8.5 MB/s 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 
00:06:10.603 15:22:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.863 /dev/nbd1 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.863 1+0 records in 00:06:10.863 1+0 records out 00:06:10.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231167 s, 17.7 MB/s 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.863 15:22:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.863 15:22:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.123 { 00:06:11.123 "nbd_device": "/dev/nbd0", 00:06:11.123 "bdev_name": "Malloc0" 00:06:11.123 }, 00:06:11.123 { 00:06:11.123 "nbd_device": "/dev/nbd1", 00:06:11.123 "bdev_name": "Malloc1" 00:06:11.123 } 00:06:11.123 ]' 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.123 { 00:06:11.123 "nbd_device": "/dev/nbd0", 00:06:11.123 "bdev_name": "Malloc0" 00:06:11.123 }, 00:06:11.123 { 00:06:11.123 "nbd_device": "/dev/nbd1", 00:06:11.123 "bdev_name": "Malloc1" 00:06:11.123 } 00:06:11.123 ]' 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.123 /dev/nbd1' 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.123 /dev/nbd1' 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c 
/dev/nbd 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.123 15:22:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.382 256+0 records in 00:06:11.383 256+0 records out 00:06:11.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443062 s, 237 MB/s 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.383 256+0 records in 00:06:11.383 256+0 records out 00:06:11.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204463 s, 51.3 MB/s 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.383 256+0 records in 00:06:11.383 256+0 records out 00:06:11.383 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0438176 s, 23.9 MB/s 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.383 15:22:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.383 15:22:09 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.642 15:22:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.643 15:22:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.643 15:22:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.643 15:22:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.643 15:22:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.643 15:22:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.643 15:22:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.643 15:22:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.643 15:22:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.643 15:22:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.902 
15:22:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.902 15:22:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.162 15:22:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.162 15:22:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.162 15:22:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.162 15:22:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.162 15:22:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.162 15:22:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.162 15:22:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.162 15:22:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.162 15:22:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.162 15:22:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.162 15:22:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.422 [2024-11-26 15:22:10.760664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.422 [2024-11-26 15:22:10.783440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.422 [2024-11-26 15:22:10.783441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.422 [2024-11-26 15:22:10.824713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.422 [2024-11-26 15:22:10.824787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:15.720 spdk_app_start Round 1 00:06:15.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.720 15:22:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.720 15:22:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:15.720 15:22:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71944 /var/tmp/spdk-nbd.sock 00:06:15.720 15:22:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71944 ']' 00:06:15.720 15:22:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.720 15:22:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.720 15:22:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.720 15:22:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.720 15:22:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.720 15:22:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.720 15:22:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:15.720 15:22:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.720 Malloc0 00:06:15.720 15:22:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.982 Malloc1 00:06:15.982 15:22:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.982 
15:22:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.982 15:22:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.982 /dev/nbd0 00:06:16.263 15:22:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.264 15:22:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:16.264 15:22:14 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.264 1+0 records in 00:06:16.264 1+0 records out 00:06:16.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345534 s, 11.9 MB/s 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.264 15:22:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.264 15:22:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.264 15:22:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.264 /dev/nbd1 00:06:16.264 15:22:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.264 15:22:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.264 15:22:14 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.264 1+0 records in 00:06:16.264 1+0 records out 00:06:16.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198364 s, 20.6 MB/s 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.264 15:22:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.524 15:22:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.524 15:22:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.524 { 00:06:16.524 "nbd_device": "/dev/nbd0", 00:06:16.524 "bdev_name": "Malloc0" 00:06:16.524 }, 00:06:16.524 { 00:06:16.524 "nbd_device": "/dev/nbd1", 00:06:16.524 "bdev_name": 
"Malloc1" 00:06:16.524 } 00:06:16.524 ]' 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.524 { 00:06:16.524 "nbd_device": "/dev/nbd0", 00:06:16.524 "bdev_name": "Malloc0" 00:06:16.524 }, 00:06:16.524 { 00:06:16.524 "nbd_device": "/dev/nbd1", 00:06:16.524 "bdev_name": "Malloc1" 00:06:16.524 } 00:06:16.524 ]' 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.524 /dev/nbd1' 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.524 /dev/nbd1' 00:06:16.524 15:22:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.784 15:22:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.784 256+0 records in 00:06:16.784 256+0 records out 00:06:16.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013714 s, 76.5 MB/s 
00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.784 256+0 records in 00:06:16.784 256+0 records out 00:06:16.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246035 s, 42.6 MB/s 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.784 256+0 records in 00:06:16.784 256+0 records out 00:06:16.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237203 s, 44.2 MB/s 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.784 15:22:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.044 15:22:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.303 15:22:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.303 15:22:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.563 15:22:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.824 [2024-11-26 15:22:16.258012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.824 [2024-11-26 15:22:16.294606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.824 [2024-11-26 15:22:16.294626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.084 [2024-11-26 15:22:16.370847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.084 [2024-11-26 15:22:16.370922] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.639 spdk_app_start Round 2 00:06:20.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.639 15:22:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.639 15:22:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.639 15:22:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71944 /var/tmp/spdk-nbd.sock 00:06:20.639 15:22:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71944 ']' 00:06:20.639 15:22:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.639 15:22:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.639 15:22:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:20.639 15:22:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.639 15:22:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.899 15:22:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.899 15:22:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:20.899 15:22:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.159 Malloc0 00:06:21.159 15:22:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.419 Malloc1 00:06:21.419 15:22:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.419 15:22:19 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.419 15:22:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.679 /dev/nbd0 00:06:21.679 15:22:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.679 15:22:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.679 15:22:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.679 15:22:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.679 15:22:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.679 15:22:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.679 15:22:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:21.679 15:22:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:21.679 15:22:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.679 15:22:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.679 15:22:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.679 1+0 records in 00:06:21.679 1+0 records out 00:06:21.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416842 s, 9.8 MB/s 00:06:21.680 15:22:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.680 15:22:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.680 15:22:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.680 15:22:19 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.680 15:22:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.680 15:22:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.680 15:22:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.680 15:22:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.680 /dev/nbd1 00:06:21.939 15:22:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.939 15:22:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.939 15:22:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:21.939 15:22:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.939 15:22:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.939 15:22:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.939 15:22:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:21.939 15:22:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:21.939 15:22:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.939 15:22:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.940 15:22:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.940 1+0 records in 00:06:21.940 1+0 records out 00:06:21.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321533 s, 12.7 MB/s 00:06:21.940 15:22:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.940 15:22:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.940 15:22:20 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.940 15:22:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.940 15:22:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.940 15:22:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.940 15:22:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.940 15:22:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.940 15:22:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.940 15:22:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.940 15:22:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.940 { 00:06:21.940 "nbd_device": "/dev/nbd0", 00:06:21.940 "bdev_name": "Malloc0" 00:06:21.940 }, 00:06:21.940 { 00:06:21.940 "nbd_device": "/dev/nbd1", 00:06:21.940 "bdev_name": "Malloc1" 00:06:21.940 } 00:06:21.940 ]' 00:06:21.940 15:22:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.940 { 00:06:21.940 "nbd_device": "/dev/nbd0", 00:06:21.940 "bdev_name": "Malloc0" 00:06:21.940 }, 00:06:21.940 { 00:06:21.940 "nbd_device": "/dev/nbd1", 00:06:21.940 "bdev_name": "Malloc1" 00:06:21.940 } 00:06:21.940 ]' 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.200 /dev/nbd1' 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.200 /dev/nbd1' 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.200 
15:22:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.200 256+0 records in 00:06:22.200 256+0 records out 00:06:22.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013382 s, 78.4 MB/s 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.200 256+0 records in 00:06:22.200 256+0 records out 00:06:22.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255385 s, 41.1 MB/s 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.200 256+0 records in 00:06:22.200 256+0 records out 00:06:22.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235775 s, 44.5 MB/s 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.200 15:22:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.460 15:22:20 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.460 15:22:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.460 15:22:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.460 15:22:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.460 15:22:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.460 15:22:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.460 15:22:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.460 15:22:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.460 15:22:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.460 15:22:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.719 15:22:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.719 15:22:21 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.719 15:22:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.719 15:22:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.979 15:22:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.979 15:22:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.979 15:22:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.979 15:22:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.979 15:22:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.979 15:22:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.979 15:22:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.979 15:22:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.979 15:22:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.979 15:22:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.238 15:22:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.498 [2024-11-26 15:22:21.728961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.498 [2024-11-26 15:22:21.771733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.498 [2024-11-26 15:22:21.771744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.498 [2024-11-26 15:22:21.848705] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.498 [2024-11-26 15:22:21.848781] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:26.034 15:22:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71944 /var/tmp/spdk-nbd.sock 00:06:26.034 15:22:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71944 ']' 00:06:26.034 15:22:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.034 15:22:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.034 15:22:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.034 15:22:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.034 15:22:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:26.294 15:22:24 event.app_repeat -- event/event.sh@39 -- # killprocess 71944 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 71944 ']' 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 71944 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71944 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71944' 00:06:26.294 killing process with pid 71944 00:06:26.294 15:22:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 71944 00:06:26.294 15:22:24 event.app_repeat -- 
common/autotest_common.sh@978 -- # wait 71944 00:06:26.554 spdk_app_start is called in Round 0. 00:06:26.554 Shutdown signal received, stop current app iteration 00:06:26.554 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 reinitialization... 00:06:26.554 spdk_app_start is called in Round 1. 00:06:26.554 Shutdown signal received, stop current app iteration 00:06:26.554 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 reinitialization... 00:06:26.554 spdk_app_start is called in Round 2. 00:06:26.554 Shutdown signal received, stop current app iteration 00:06:26.554 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 reinitialization... 00:06:26.554 spdk_app_start is called in Round 3. 00:06:26.554 Shutdown signal received, stop current app iteration 00:06:26.554 15:22:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.554 15:22:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:26.554 00:06:26.554 real 0m17.527s 00:06:26.554 user 0m38.393s 00:06:26.554 sys 0m2.762s 00:06:26.554 15:22:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.554 15:22:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.554 ************************************ 00:06:26.554 END TEST app_repeat 00:06:26.554 ************************************ 00:06:26.813 15:22:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.813 15:22:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.813 15:22:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.813 15:22:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.813 15:22:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.813 ************************************ 00:06:26.813 START TEST cpu_locks 00:06:26.813 ************************************ 00:06:26.813 15:22:25 event.cpu_locks -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.813 * Looking for test storage... 00:06:26.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:26.813 15:22:25 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.813 15:22:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.813 15:22:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.072 15:22:25 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.072 15:22:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.072 15:22:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.072 15:22:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.072 15:22:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.072 15:22:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.072 15:22:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.072 15:22:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.072 15:22:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.073 15:22:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:27.073 15:22:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.073 15:22:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.073 --rc genhtml_branch_coverage=1 00:06:27.073 --rc genhtml_function_coverage=1 00:06:27.073 --rc genhtml_legend=1 00:06:27.073 --rc geninfo_all_blocks=1 00:06:27.073 --rc geninfo_unexecuted_blocks=1 00:06:27.073 00:06:27.073 ' 00:06:27.073 15:22:25 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.073 --rc genhtml_branch_coverage=1 00:06:27.073 --rc genhtml_function_coverage=1 00:06:27.073 --rc genhtml_legend=1 00:06:27.073 --rc geninfo_all_blocks=1 00:06:27.073 --rc geninfo_unexecuted_blocks=1 
00:06:27.073 00:06:27.073 ' 00:06:27.073 15:22:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.073 --rc genhtml_branch_coverage=1 00:06:27.073 --rc genhtml_function_coverage=1 00:06:27.073 --rc genhtml_legend=1 00:06:27.073 --rc geninfo_all_blocks=1 00:06:27.073 --rc geninfo_unexecuted_blocks=1 00:06:27.073 00:06:27.073 ' 00:06:27.073 15:22:25 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.073 --rc genhtml_branch_coverage=1 00:06:27.073 --rc genhtml_function_coverage=1 00:06:27.073 --rc genhtml_legend=1 00:06:27.073 --rc geninfo_all_blocks=1 00:06:27.073 --rc geninfo_unexecuted_blocks=1 00:06:27.073 00:06:27.073 ' 00:06:27.073 15:22:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:27.073 15:22:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:27.073 15:22:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:27.073 15:22:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:27.073 15:22:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.073 15:22:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.073 15:22:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.073 ************************************ 00:06:27.073 START TEST default_locks 00:06:27.073 ************************************ 00:06:27.073 15:22:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:27.073 15:22:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72371 00:06:27.073 15:22:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.073 
15:22:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72371 00:06:27.073 15:22:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72371 ']' 00:06:27.073 15:22:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.073 15:22:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.073 15:22:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.073 15:22:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.073 15:22:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.073 [2024-11-26 15:22:25.425471] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:27.073 [2024-11-26 15:22:25.425655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72371 ] 00:06:27.333 [2024-11-26 15:22:25.563253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:27.333 [2024-11-26 15:22:25.602930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.333 [2024-11-26 15:22:25.646967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.901 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.901 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:27.901 15:22:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72371 00:06:27.901 15:22:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72371 00:06:27.901 15:22:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72371 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72371 ']' 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72371 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72371 00:06:28.159 killing process with pid 72371 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72371' 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72371 00:06:28.159 15:22:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72371 00:06:28.744 15:22:27 
event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72371 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72371 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72371 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72371 ']' 00:06:28.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.744 ERROR: process (pid: 72371) is no longer running 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.744 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72371) - No such process 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.744 00:06:28.744 real 0m1.853s 00:06:28.744 user 0m1.664s 00:06:28.744 sys 0m0.700s 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.744 15:22:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.744 ************************************ 00:06:28.744 END TEST default_locks 00:06:28.744 ************************************ 00:06:29.017 15:22:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:29.017 15:22:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.017 15:22:27 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.017 15:22:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.017 ************************************ 00:06:29.017 START TEST default_locks_via_rpc 00:06:29.017 ************************************ 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72424 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72424 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72424 ']' 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.017 15:22:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.017 [2024-11-26 15:22:27.344612] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:06:29.017 [2024-11-26 15:22:27.344732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72424 ] 00:06:29.017 [2024-11-26 15:22:27.479595] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.276 [2024-11-26 15:22:27.516408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.277 [2024-11-26 15:22:27.558442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72424 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72424 00:06:29.846 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72424 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72424 ']' 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72424 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72424 00:06:30.105 killing process with pid 72424 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72424' 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72424 00:06:30.105 15:22:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72424 00:06:30.675 ************************************ 00:06:30.675 END TEST 
default_locks_via_rpc 00:06:30.675 ************************************ 00:06:30.675 00:06:30.675 real 0m1.823s 00:06:30.675 user 0m1.632s 00:06:30.675 sys 0m0.684s 00:06:30.675 15:22:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.675 15:22:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.675 15:22:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.675 15:22:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.675 15:22:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.675 15:22:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.675 ************************************ 00:06:30.675 START TEST non_locking_app_on_locked_coremask 00:06:30.675 ************************************ 00:06:30.675 15:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:30.675 15:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72476 00:06:30.675 15:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.675 15:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72476 /var/tmp/spdk.sock 00:06:30.675 15:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72476 ']' 00:06:30.675 15:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.675 15:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.675 15:22:29 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.675 15:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.675 15:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.933 [2024-11-26 15:22:29.246825] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:30.933 [2024-11-26 15:22:29.246974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72476 ] 00:06:30.933 [2024-11-26 15:22:29.387615] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:31.192 [2024-11-26 15:22:29.427491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.192 [2024-11-26 15:22:29.466600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72493 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72493 /var/tmp/spdk2.sock 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72493 ']' 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.760 15:22:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.760 [2024-11-26 15:22:30.110656] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:31.760 [2024-11-26 15:22:30.110891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72493 ] 00:06:32.019 [2024-11-26 15:22:30.246782] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:32.019 [2024-11-26 15:22:30.280062] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.019 [2024-11-26 15:22:30.280107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.019 [2024-11-26 15:22:30.365413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.587 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.587 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:32.587 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72476 00:06:32.587 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72476 00:06:32.587 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72476 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # '[' -z 72476 ']' 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72476 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72476 00:06:33.527 killing process with pid 72476 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72476' 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72476 00:06:33.527 15:22:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72476 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72493 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72493 ']' 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72493 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72493 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72493' 00:06:34.906 killing process with pid 72493 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72493 00:06:34.906 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72493 00:06:35.475 ************************************ 00:06:35.475 END TEST non_locking_app_on_locked_coremask 00:06:35.475 ************************************ 00:06:35.475 00:06:35.475 real 0m4.518s 00:06:35.475 user 0m4.379s 00:06:35.475 sys 0m1.479s 00:06:35.475 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.475 15:22:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.475 15:22:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:35.475 15:22:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.475 15:22:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.475 15:22:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.475 ************************************ 00:06:35.475 START TEST locking_app_on_unlocked_coremask 00:06:35.475 ************************************ 00:06:35.475 15:22:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:35.475 15:22:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72564 00:06:35.475 15:22:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:35.476 15:22:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72564 /var/tmp/spdk.sock 00:06:35.476 15:22:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72564 ']' 00:06:35.476 15:22:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.476 15:22:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.476 15:22:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.476 15:22:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.476 15:22:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.476 [2024-11-26 15:22:33.842776] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:35.476 [2024-11-26 15:22:33.842917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72564 ] 00:06:35.735 [2024-11-26 15:22:33.986447] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:35.735 [2024-11-26 15:22:34.025927] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.735 [2024-11-26 15:22:34.025976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.735 [2024-11-26 15:22:34.068032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72580 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72580 /var/tmp/spdk2.sock 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72580 ']' 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.304 15:22:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.304 [2024-11-26 15:22:34.704730] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:36.304 [2024-11-26 15:22:34.704947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72580 ] 00:06:36.563 [2024-11-26 15:22:34.840795] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:36.563 [2024-11-26 15:22:34.874006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.563 [2024-11-26 15:22:34.961525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.133 15:22:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.133 15:22:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.133 15:22:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72580 00:06:37.133 15:22:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72580 00:06:37.133 15:22:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72564 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72564 ']' 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # kill -0 72564 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72564 00:06:38.072 killing process with pid 72564 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72564' 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72564 00:06:38.072 15:22:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72564 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72580 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72580 ']' 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72580 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72580 00:06:38.639 killing process with pid 72580 00:06:38.639 15:22:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72580' 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72580 00:06:38.639 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72580 00:06:39.214 00:06:39.214 real 0m3.694s 00:06:39.214 user 0m3.656s 00:06:39.214 sys 0m1.390s 00:06:39.214 ************************************ 00:06:39.214 END TEST locking_app_on_unlocked_coremask 00:06:39.214 ************************************ 00:06:39.214 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.214 15:22:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.214 15:22:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:39.214 15:22:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.214 15:22:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.214 15:22:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.215 ************************************ 00:06:39.215 START TEST locking_app_on_locked_coremask 00:06:39.215 ************************************ 00:06:39.215 15:22:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:39.215 15:22:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72649 00:06:39.215 15:22:37 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.215 15:22:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72649 /var/tmp/spdk.sock 00:06:39.215 15:22:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72649 ']' 00:06:39.215 15:22:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.215 15:22:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.215 15:22:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.215 15:22:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.215 15:22:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.215 [2024-11-26 15:22:37.582480] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:39.215 [2024-11-26 15:22:37.582711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72649 ] 00:06:39.484 [2024-11-26 15:22:37.718333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:39.484 [2024-11-26 15:22:37.756654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.484 [2024-11-26 15:22:37.781234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72665 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72665 /var/tmp/spdk2.sock 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72665 /var/tmp/spdk2.sock 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72665 /var/tmp/spdk2.sock 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72665 ']' 00:06:40.052 15:22:38 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.052 15:22:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.052 [2024-11-26 15:22:38.483150] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:40.052 [2024-11-26 15:22:38.483368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72665 ] 00:06:40.312 [2024-11-26 15:22:38.617124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.312 [2024-11-26 15:22:38.649625] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72649 has claimed it. 00:06:40.312 [2024-11-26 15:22:38.649694] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:40.890 ERROR: process (pid: 72665) is no longer running 00:06:40.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72665) - No such process 00:06:40.890 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.890 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:40.890 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:40.890 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.890 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.890 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.890 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72649 00:06:40.890 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72649 00:06:40.890 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72649 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72649 ']' 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72649 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72649 00:06:41.149 
killing process with pid 72649 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72649' 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72649 00:06:41.149 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72649 00:06:41.720 00:06:41.720 real 0m2.450s 00:06:41.720 user 0m2.658s 00:06:41.720 sys 0m0.726s 00:06:41.720 ************************************ 00:06:41.720 END TEST locking_app_on_locked_coremask 00:06:41.720 ************************************ 00:06:41.720 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.720 15:22:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.720 15:22:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:41.720 15:22:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.720 15:22:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.720 15:22:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.720 ************************************ 00:06:41.720 START TEST locking_overlapped_coremask 00:06:41.720 ************************************ 00:06:41.720 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:41.720 15:22:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72707 00:06:41.720 15:22:40 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72707 /var/tmp/spdk.sock 00:06:41.720 15:22:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:41.720 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72707 ']' 00:06:41.720 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.720 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.720 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.720 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.720 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.720 [2024-11-26 15:22:40.100560] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:41.720 [2024-11-26 15:22:40.100781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72707 ] 00:06:41.980 [2024-11-26 15:22:40.236652] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:41.980 [2024-11-26 15:22:40.276644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.980 [2024-11-26 15:22:40.303875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.980 [2024-11-26 15:22:40.303965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.980 [2024-11-26 15:22:40.304098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72725 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72725 /var/tmp/spdk2.sock 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72725 /var/tmp/spdk2.sock 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 
72725 /var/tmp/spdk2.sock 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72725 ']' 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.550 15:22:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.550 [2024-11-26 15:22:40.986841] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:42.550 [2024-11-26 15:22:40.987063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72725 ] 00:06:42.810 [2024-11-26 15:22:41.124324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:42.810 [2024-11-26 15:22:41.156315] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72707 has claimed it. 00:06:42.810 [2024-11-26 15:22:41.156369] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:43.380 ERROR: process (pid: 72725) is no longer running 00:06:43.380 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72725) - No such process 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72707 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72707 ']' 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72707 00:06:43.380 15:22:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72707 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72707' 00:06:43.380 killing process with pid 72707 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72707 00:06:43.380 15:22:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72707 00:06:43.640 00:06:43.640 real 0m2.036s 00:06:43.640 user 0m5.447s 00:06:43.640 sys 0m0.497s 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.640 ************************************ 00:06:43.640 END TEST locking_overlapped_coremask 00:06:43.640 ************************************ 00:06:43.640 15:22:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:43.640 15:22:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.640 15:22:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.640 15:22:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.640 ************************************ 00:06:43.640 START TEST 
locking_overlapped_coremask_via_rpc 00:06:43.640 ************************************ 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72767 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72767 /var/tmp/spdk.sock 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72767 ']' 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.640 15:22:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.900 [2024-11-26 15:22:42.204308] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:06:43.900 [2024-11-26 15:22:42.204442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72767 ] 00:06:43.900 [2024-11-26 15:22:42.339869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.900 [2024-11-26 15:22:42.370971] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:43.900 [2024-11-26 15:22:42.371006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.159 [2024-11-26 15:22:42.397884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.159 [2024-11-26 15:22:42.397978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.159 [2024-11-26 15:22:42.398089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72787 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72787 /var/tmp/spdk2.sock 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72787 ']' 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.731 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.731 [2024-11-26 15:22:43.087157] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:44.731 [2024-11-26 15:22:43.087374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72787 ] 00:06:44.992 [2024-11-26 15:22:43.223599] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:44.992 [2024-11-26 15:22:43.254567] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.992 [2024-11-26 15:22:43.254610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.992 [2024-11-26 15:22:43.312317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.992 [2024-11-26 15:22:43.319352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.992 [2024-11-26 15:22:43.319479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.562 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.562 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.562 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.563 15:22:43 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.563 [2024-11-26 15:22:43.940368] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72767 has claimed it. 00:06:45.563 request: 00:06:45.563 { 00:06:45.563 "method": "framework_enable_cpumask_locks", 00:06:45.563 "req_id": 1 00:06:45.563 } 00:06:45.563 Got JSON-RPC error response 00:06:45.563 response: 00:06:45.563 { 00:06:45.563 "code": -32603, 00:06:45.563 "message": "Failed to claim CPU core: 2" 00:06:45.563 } 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72767 /var/tmp/spdk.sock 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 72767 ']' 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.563 15:22:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.823 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.823 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:45.823 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72787 /var/tmp/spdk2.sock 00:06:45.823 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72787 ']' 00:06:45.823 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.823 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.823 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:45.823 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.823 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.084 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.084 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.084 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.084 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.084 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.084 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.084 00:06:46.084 real 0m2.237s 00:06:46.084 user 0m1.006s 00:06:46.084 sys 0m0.173s 00:06:46.084 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.084 15:22:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.084 ************************************ 00:06:46.084 END TEST locking_overlapped_coremask_via_rpc 00:06:46.084 ************************************ 00:06:46.084 15:22:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:46.084 15:22:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72767 ]] 00:06:46.084 15:22:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 72767 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72767 ']' 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72767 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72767 00:06:46.084 killing process with pid 72767 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72767' 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72767 00:06:46.084 15:22:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72767 00:06:46.654 15:22:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72787 ]] 00:06:46.654 15:22:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72787 00:06:46.654 15:22:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72787 ']' 00:06:46.654 15:22:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72787 00:06:46.654 15:22:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:46.654 15:22:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.654 15:22:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72787 00:06:46.654 killing process with pid 72787 00:06:46.654 15:22:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:46.654 15:22:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:46.655 15:22:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 72787' 00:06:46.655 15:22:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72787 00:06:46.655 15:22:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72787 00:06:47.225 15:22:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.225 15:22:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:47.225 Process with pid 72767 is not found 00:06:47.225 Process with pid 72787 is not found 00:06:47.225 15:22:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72767 ]] 00:06:47.225 15:22:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72767 00:06:47.225 15:22:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72767 ']' 00:06:47.225 15:22:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72767 00:06:47.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72767) - No such process 00:06:47.225 15:22:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72767 is not found' 00:06:47.225 15:22:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72787 ]] 00:06:47.225 15:22:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72787 00:06:47.225 15:22:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72787 ']' 00:06:47.225 15:22:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72787 00:06:47.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72787) - No such process 00:06:47.225 15:22:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72787 is not found' 00:06:47.225 15:22:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.225 00:06:47.225 real 0m20.364s 00:06:47.225 user 0m32.551s 00:06:47.225 sys 0m6.728s 00:06:47.225 15:22:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.225 15:22:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.225 
************************************ 00:06:47.225 END TEST cpu_locks 00:06:47.225 ************************************ 00:06:47.225 00:06:47.225 real 0m48.458s 00:06:47.225 user 1m30.390s 00:06:47.225 sys 0m10.684s 00:06:47.225 15:22:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.225 15:22:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.225 ************************************ 00:06:47.225 END TEST event 00:06:47.225 ************************************ 00:06:47.225 15:22:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.225 15:22:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.225 15:22:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.225 15:22:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.225 ************************************ 00:06:47.225 START TEST thread 00:06:47.225 ************************************ 00:06:47.225 15:22:45 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.225 * Looking for test storage... 
00:06:47.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.486 15:22:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.486 15:22:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.486 15:22:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.486 15:22:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.486 15:22:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.486 15:22:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.486 15:22:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.486 15:22:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.486 15:22:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.486 15:22:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.486 15:22:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.486 15:22:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:47.486 15:22:45 thread -- scripts/common.sh@345 -- # : 1 00:06:47.486 15:22:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.486 15:22:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.486 15:22:45 thread -- scripts/common.sh@365 -- # decimal 1 00:06:47.486 15:22:45 thread -- scripts/common.sh@353 -- # local d=1 00:06:47.486 15:22:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.486 15:22:45 thread -- scripts/common.sh@355 -- # echo 1 00:06:47.486 15:22:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.486 15:22:45 thread -- scripts/common.sh@366 -- # decimal 2 00:06:47.486 15:22:45 thread -- scripts/common.sh@353 -- # local d=2 00:06:47.486 15:22:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.486 15:22:45 thread -- scripts/common.sh@355 -- # echo 2 00:06:47.486 15:22:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.486 15:22:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.486 15:22:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.486 15:22:45 thread -- scripts/common.sh@368 -- # return 0 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.486 --rc genhtml_branch_coverage=1 00:06:47.486 --rc genhtml_function_coverage=1 00:06:47.486 --rc genhtml_legend=1 00:06:47.486 --rc geninfo_all_blocks=1 00:06:47.486 --rc geninfo_unexecuted_blocks=1 00:06:47.486 00:06:47.486 ' 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.486 --rc genhtml_branch_coverage=1 00:06:47.486 --rc genhtml_function_coverage=1 00:06:47.486 --rc genhtml_legend=1 00:06:47.486 --rc geninfo_all_blocks=1 00:06:47.486 --rc geninfo_unexecuted_blocks=1 00:06:47.486 00:06:47.486 ' 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.486 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.486 --rc genhtml_branch_coverage=1 00:06:47.486 --rc genhtml_function_coverage=1 00:06:47.486 --rc genhtml_legend=1 00:06:47.486 --rc geninfo_all_blocks=1 00:06:47.486 --rc geninfo_unexecuted_blocks=1 00:06:47.486 00:06:47.486 ' 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.486 --rc genhtml_branch_coverage=1 00:06:47.486 --rc genhtml_function_coverage=1 00:06:47.486 --rc genhtml_legend=1 00:06:47.486 --rc geninfo_all_blocks=1 00:06:47.486 --rc geninfo_unexecuted_blocks=1 00:06:47.486 00:06:47.486 ' 00:06:47.486 15:22:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.486 15:22:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.486 ************************************ 00:06:47.486 START TEST thread_poller_perf 00:06:47.486 ************************************ 00:06:47.486 15:22:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.486 [2024-11-26 15:22:45.863423] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:47.486 [2024-11-26 15:22:45.863638] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72925 ] 00:06:47.746 [2024-11-26 15:22:45.996805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:47.746 [2024-11-26 15:22:46.036277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.746 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:47.746 [2024-11-26 15:22:46.079508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.704 [2024-11-26T15:22:47.183Z] ====================================== 00:06:48.704 [2024-11-26T15:22:47.183Z] busy:2302650470 (cyc) 00:06:48.704 [2024-11-26T15:22:47.183Z] total_run_count: 411000 00:06:48.704 [2024-11-26T15:22:47.183Z] tsc_hz: 2294600000 (cyc) 00:06:48.704 [2024-11-26T15:22:47.183Z] ====================================== 00:06:48.704 [2024-11-26T15:22:47.183Z] poller_cost: 5602 (cyc), 2441 (nsec) 00:06:48.704 ************************************ 00:06:48.704 END TEST thread_poller_perf 00:06:48.704 ************************************ 00:06:48.704 00:06:48.704 real 0m1.339s 00:06:48.704 user 0m1.123s 00:06:48.704 sys 0m0.109s 00:06:48.704 15:22:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.704 15:22:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 15:22:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.964 15:22:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:48.964 15:22:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.964 15:22:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 ************************************ 00:06:48.964 START TEST thread_poller_perf 00:06:48.964 ************************************ 00:06:48.965 15:22:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.965 [2024-11-26 15:22:47.267011] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 
initialization... 00:06:48.965 [2024-11-26 15:22:47.267159] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72956 ] 00:06:48.965 [2024-11-26 15:22:47.399661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.965 [2024-11-26 15:22:47.437092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.224 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:49.224 [2024-11-26 15:22:47.462163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.163 [2024-11-26T15:22:48.642Z] ====================================== 00:06:50.163 [2024-11-26T15:22:48.642Z] busy:2298475910 (cyc) 00:06:50.163 [2024-11-26T15:22:48.642Z] total_run_count: 5629000 00:06:50.163 [2024-11-26T15:22:48.642Z] tsc_hz: 2294600000 (cyc) 00:06:50.163 [2024-11-26T15:22:48.642Z] ====================================== 00:06:50.163 [2024-11-26T15:22:48.642Z] poller_cost: 408 (cyc), 177 (nsec) 00:06:50.163 ************************************ 00:06:50.163 END TEST thread_poller_perf 00:06:50.163 ************************************ 00:06:50.163 00:06:50.163 real 0m1.311s 00:06:50.163 user 0m1.107s 00:06:50.163 sys 0m0.098s 00:06:50.163 15:22:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.163 15:22:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.163 15:22:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:50.163 ************************************ 00:06:50.163 END TEST thread 00:06:50.163 ************************************ 00:06:50.163 00:06:50.163 real 0m3.009s 00:06:50.163 user 0m2.385s 00:06:50.163 sys 0m0.425s 00:06:50.163 15:22:48 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.163 15:22:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.423 15:22:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:50.423 15:22:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:50.423 15:22:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.423 15:22:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.423 15:22:48 -- common/autotest_common.sh@10 -- # set +x 00:06:50.423 ************************************ 00:06:50.423 START TEST app_cmdline 00:06:50.423 ************************************ 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:50.423 * Looking for test storage... 00:06:50.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.423 15:22:48 app_cmdline -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.423 15:22:48 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.423 --rc genhtml_branch_coverage=1 00:06:50.423 --rc genhtml_function_coverage=1 00:06:50.423 --rc genhtml_legend=1 00:06:50.423 --rc geninfo_all_blocks=1 00:06:50.423 --rc geninfo_unexecuted_blocks=1 00:06:50.423 
00:06:50.423 ' 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.423 --rc genhtml_branch_coverage=1 00:06:50.423 --rc genhtml_function_coverage=1 00:06:50.423 --rc genhtml_legend=1 00:06:50.423 --rc geninfo_all_blocks=1 00:06:50.423 --rc geninfo_unexecuted_blocks=1 00:06:50.423 00:06:50.423 ' 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.423 --rc genhtml_branch_coverage=1 00:06:50.423 --rc genhtml_function_coverage=1 00:06:50.423 --rc genhtml_legend=1 00:06:50.423 --rc geninfo_all_blocks=1 00:06:50.423 --rc geninfo_unexecuted_blocks=1 00:06:50.423 00:06:50.423 ' 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.423 --rc genhtml_branch_coverage=1 00:06:50.423 --rc genhtml_function_coverage=1 00:06:50.423 --rc genhtml_legend=1 00:06:50.423 --rc geninfo_all_blocks=1 00:06:50.423 --rc geninfo_unexecuted_blocks=1 00:06:50.423 00:06:50.423 ' 00:06:50.423 15:22:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:50.423 15:22:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73045 00:06:50.423 15:22:48 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:50.423 15:22:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73045 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73045 ']' 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.423 15:22:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:50.683 [2024-11-26 15:22:48.947400] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:50.683 [2024-11-26 15:22:48.947619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73045 ] 00:06:50.683 [2024-11-26 15:22:49.082366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:50.683 [2024-11-26 15:22:49.122062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.683 [2024-11-26 15:22:49.146707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.620 15:22:49 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.620 15:22:49 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:51.620 15:22:49 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:51.620 { 00:06:51.620 "version": "SPDK v25.01-pre git sha1 2a91567e4", 00:06:51.620 "fields": { 00:06:51.620 "major": 25, 00:06:51.620 "minor": 1, 00:06:51.620 "patch": 0, 00:06:51.620 "suffix": "-pre", 00:06:51.620 "commit": "2a91567e4" 00:06:51.620 } 00:06:51.620 } 00:06:51.620 15:22:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:51.620 15:22:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:51.620 15:22:49 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:06:51.621 15:22:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:51.621 15:22:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:51.621 15:22:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:51.621 15:22:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.621 15:22:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:51.621 15:22:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:51.621 15:22:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@646 -- # 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:51.621 15:22:49 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.881 request: 00:06:51.881 { 00:06:51.881 "method": "env_dpdk_get_mem_stats", 00:06:51.881 "req_id": 1 00:06:51.881 } 00:06:51.881 Got JSON-RPC error response 00:06:51.881 response: 00:06:51.881 { 00:06:51.881 "code": -32601, 00:06:51.881 "message": "Method not found" 00:06:51.881 } 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.881 15:22:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73045 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73045 ']' 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73045 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73045 00:06:51.881 killing process with pid 73045 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73045' 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@973 -- # kill 73045 00:06:51.881 15:22:50 app_cmdline -- common/autotest_common.sh@978 -- # wait 73045 00:06:52.141 
00:06:52.141 real 0m1.922s 00:06:52.141 user 0m2.130s 00:06:52.141 sys 0m0.520s 00:06:52.141 15:22:50 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.141 15:22:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.141 ************************************ 00:06:52.141 END TEST app_cmdline 00:06:52.141 ************************************ 00:06:52.401 15:22:50 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:52.401 15:22:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.401 15:22:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.401 15:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.401 ************************************ 00:06:52.401 START TEST version 00:06:52.401 ************************************ 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:52.401 * Looking for test storage... 
00:06:52.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.401 15:22:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.401 15:22:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.401 15:22:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.401 15:22:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.401 15:22:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.401 15:22:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.401 15:22:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.401 15:22:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.401 15:22:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.401 15:22:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.401 15:22:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.401 15:22:50 version -- scripts/common.sh@344 -- # case "$op" in 00:06:52.401 15:22:50 version -- scripts/common.sh@345 -- # : 1 00:06:52.401 15:22:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.401 15:22:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.401 15:22:50 version -- scripts/common.sh@365 -- # decimal 1 00:06:52.401 15:22:50 version -- scripts/common.sh@353 -- # local d=1 00:06:52.401 15:22:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.401 15:22:50 version -- scripts/common.sh@355 -- # echo 1 00:06:52.401 15:22:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.401 15:22:50 version -- scripts/common.sh@366 -- # decimal 2 00:06:52.401 15:22:50 version -- scripts/common.sh@353 -- # local d=2 00:06:52.401 15:22:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.401 15:22:50 version -- scripts/common.sh@355 -- # echo 2 00:06:52.401 15:22:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.401 15:22:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.401 15:22:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.401 15:22:50 version -- scripts/common.sh@368 -- # return 0 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.401 --rc genhtml_branch_coverage=1 00:06:52.401 --rc genhtml_function_coverage=1 00:06:52.401 --rc genhtml_legend=1 00:06:52.401 --rc geninfo_all_blocks=1 00:06:52.401 --rc geninfo_unexecuted_blocks=1 00:06:52.401 00:06:52.401 ' 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.401 --rc genhtml_branch_coverage=1 00:06:52.401 --rc genhtml_function_coverage=1 00:06:52.401 --rc genhtml_legend=1 00:06:52.401 --rc geninfo_all_blocks=1 00:06:52.401 --rc geninfo_unexecuted_blocks=1 00:06:52.401 00:06:52.401 ' 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:52.401 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.401 --rc genhtml_branch_coverage=1 00:06:52.401 --rc genhtml_function_coverage=1 00:06:52.401 --rc genhtml_legend=1 00:06:52.401 --rc geninfo_all_blocks=1 00:06:52.401 --rc geninfo_unexecuted_blocks=1 00:06:52.401 00:06:52.401 ' 00:06:52.401 15:22:50 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.401 --rc genhtml_branch_coverage=1 00:06:52.401 --rc genhtml_function_coverage=1 00:06:52.401 --rc genhtml_legend=1 00:06:52.401 --rc geninfo_all_blocks=1 00:06:52.401 --rc geninfo_unexecuted_blocks=1 00:06:52.401 00:06:52.401 ' 00:06:52.401 15:22:50 version -- app/version.sh@17 -- # get_header_version major 00:06:52.661 15:22:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.661 15:22:50 version -- app/version.sh@14 -- # cut -f2 00:06:52.661 15:22:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.661 15:22:50 version -- app/version.sh@17 -- # major=25 00:06:52.661 15:22:50 version -- app/version.sh@18 -- # get_header_version minor 00:06:52.661 15:22:50 version -- app/version.sh@14 -- # cut -f2 00:06:52.661 15:22:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.661 15:22:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.661 15:22:50 version -- app/version.sh@18 -- # minor=1 00:06:52.661 15:22:50 version -- app/version.sh@19 -- # get_header_version patch 00:06:52.661 15:22:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.661 15:22:50 version -- app/version.sh@14 -- # cut -f2 00:06:52.661 15:22:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.661 15:22:50 version -- app/version.sh@19 -- # patch=0 00:06:52.661 
15:22:50 version -- app/version.sh@20 -- # get_header_version suffix 00:06:52.661 15:22:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.661 15:22:50 version -- app/version.sh@14 -- # cut -f2 00:06:52.661 15:22:50 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.661 15:22:50 version -- app/version.sh@20 -- # suffix=-pre 00:06:52.661 15:22:50 version -- app/version.sh@22 -- # version=25.1 00:06:52.661 15:22:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:52.661 15:22:50 version -- app/version.sh@28 -- # version=25.1rc0 00:06:52.661 15:22:50 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:52.661 15:22:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:52.661 15:22:50 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:52.661 15:22:50 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:52.661 00:06:52.661 real 0m0.315s 00:06:52.661 user 0m0.186s 00:06:52.661 sys 0m0.182s 00:06:52.661 15:22:50 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.661 ************************************ 00:06:52.661 END TEST version 00:06:52.661 ************************************ 00:06:52.661 15:22:50 version -- common/autotest_common.sh@10 -- # set +x 00:06:52.661 15:22:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:52.661 15:22:51 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:52.661 15:22:51 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:52.661 15:22:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.661 15:22:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.661 15:22:51 -- 
common/autotest_common.sh@10 -- # set +x 00:06:52.661 ************************************ 00:06:52.661 START TEST bdev_raid 00:06:52.661 ************************************ 00:06:52.661 15:22:51 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:52.661 * Looking for test storage... 00:06:52.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.922 15:22:51 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.922 --rc genhtml_branch_coverage=1 00:06:52.922 --rc genhtml_function_coverage=1 00:06:52.922 --rc genhtml_legend=1 00:06:52.922 --rc geninfo_all_blocks=1 00:06:52.922 --rc geninfo_unexecuted_blocks=1 00:06:52.922 00:06:52.922 ' 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.922 --rc genhtml_branch_coverage=1 00:06:52.922 --rc genhtml_function_coverage=1 00:06:52.922 --rc genhtml_legend=1 00:06:52.922 --rc geninfo_all_blocks=1 00:06:52.922 --rc geninfo_unexecuted_blocks=1 00:06:52.922 00:06:52.922 ' 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:52.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.922 --rc genhtml_branch_coverage=1 00:06:52.922 --rc genhtml_function_coverage=1 00:06:52.922 --rc genhtml_legend=1 00:06:52.922 --rc geninfo_all_blocks=1 00:06:52.922 --rc geninfo_unexecuted_blocks=1 00:06:52.922 00:06:52.922 ' 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.922 --rc genhtml_branch_coverage=1 00:06:52.922 --rc genhtml_function_coverage=1 00:06:52.922 --rc genhtml_legend=1 00:06:52.922 --rc geninfo_all_blocks=1 00:06:52.922 --rc geninfo_unexecuted_blocks=1 00:06:52.922 00:06:52.922 ' 00:06:52.922 15:22:51 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:52.922 15:22:51 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:52.922 15:22:51 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:52.922 15:22:51 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:52.922 15:22:51 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:52.922 15:22:51 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:52.922 15:22:51 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.922 15:22:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:52.922 ************************************ 00:06:52.922 START TEST raid1_resize_data_offset_test 00:06:52.922 ************************************ 00:06:52.922 Process raid pid: 73205 00:06:52.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.922 15:22:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:52.922 15:22:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=73205 00:06:52.922 15:22:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 73205' 00:06:52.922 15:22:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 73205 00:06:52.922 15:22:51 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:52.923 15:22:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 73205 ']' 00:06:52.923 15:22:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.923 15:22:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.923 15:22:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.923 15:22:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.923 15:22:51 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.923 [2024-11-26 15:22:51.351641] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:52.923 [2024-11-26 15:22:51.351859] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.183 [2024-11-26 15:22:51.487119] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:53.183 [2024-11-26 15:22:51.522618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.183 [2024-11-26 15:22:51.548375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.183 [2024-11-26 15:22:51.590826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.183 [2024-11-26 15:22:51.590937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.753 malloc0 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.753 malloc1 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:53.753 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.753 15:22:52 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.013 null0 00:06:54.013 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.013 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.014 [2024-11-26 15:22:52.238643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:54.014 [2024-11-26 15:22:52.240478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:54.014 [2024-11-26 15:22:52.240582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:54.014 [2024-11-26 15:22:52.240734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:06:54.014 [2024-11-26 15:22:52.240793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:54.014 [2024-11-26 15:22:52.241065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:06:54.014 [2024-11-26 15:22:52.241251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:06:54.014 [2024-11-26 15:22:52.241294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:06:54.014 [2024-11-26 15:22:52.241455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 
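The numbers logged above fit together: at blocklen 512, the raid1's blockcnt of 129024 is a 64 MiB base bdev (131072 blocks) minus the 2048-block (1 MiB) data_offset the test then verifies via `jq`. A minimal standalone sketch of that arithmetic (the helper name is illustrative, not an SPDK function):

```shell
# Arithmetic behind the logged raid1 numbers: blockcnt 129024 at
# blocklen 512 with data_offset 2048 (all three values taken from the
# trace above; raid1_blockcnt is an illustrative helper, not SPDK code).
blocklen=512
data_offset=2048   # blocks; 2048 * 512 B = 1 MiB

raid1_blockcnt() {
    local base_mib=$1
    local base_blocks=$(( base_mib * 1024 * 1024 / blocklen ))
    echo $(( base_blocks - data_offset ))
}

raid1_blockcnt 64   # prints 129024, matching the trace
```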
00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.014 [2024-11-26 15:22:52.290625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.014 malloc2 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.014 
[2024-11-26 15:22:52.417585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:54.014 [2024-11-26 15:22:52.422820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.014 [2024-11-26 15:22:52.424668] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 73205 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 73205 ']' 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 73205 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.014 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73205 00:06:54.274 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:06:54.274 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.274 killing process with pid 73205 00:06:54.274 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73205' 00:06:54.274 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 73205 00:06:54.274 [2024-11-26 15:22:52.520072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.274 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 73205 00:06:54.274 [2024-11-26 15:22:52.520810] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:54.274 [2024-11-26 15:22:52.520871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.274 [2024-11-26 15:22:52.520892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:54.274 [2024-11-26 15:22:52.527036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.274 [2024-11-26 15:22:52.527332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.274 [2024-11-26 15:22:52.527346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:06:54.274 [2024-11-26 15:22:52.735830] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.534 15:22:52 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:54.534 00:06:54.534 real 0m1.672s 00:06:54.534 user 0m1.669s 00:06:54.534 sys 0m0.423s 00:06:54.534 ************************************ 00:06:54.534 END TEST raid1_resize_data_offset_test 00:06:54.534 ************************************ 00:06:54.535 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:54.535 15:22:52 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.535 15:22:52 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:54.535 15:22:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.535 15:22:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.535 15:22:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.535 ************************************ 00:06:54.535 START TEST raid0_resize_superblock_test 00:06:54.535 ************************************ 00:06:54.535 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:54.535 15:22:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:54.535 15:22:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73261 00:06:54.535 15:22:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.795 15:22:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73261' 00:06:54.795 Process raid pid: 73261 00:06:54.795 15:22:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73261 00:06:54.795 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73261 ']' 00:06:54.795 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.795 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.795 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:54.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.795 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.795 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.795 [2024-11-26 15:22:53.090359] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:54.795 [2024-11-26 15:22:53.090572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.795 [2024-11-26 15:22:53.225733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:54.795 [2024-11-26 15:22:53.261117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.055 [2024-11-26 15:22:53.287254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.055 [2024-11-26 15:22:53.329887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.055 [2024-11-26 15:22:53.329924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.624 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.624 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:55.624 15:22:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:55.624 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.624 15:22:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.624 malloc0 00:06:55.624 15:22:54 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.624 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:55.624 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.624 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.624 [2024-11-26 15:22:54.025597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:55.624 [2024-11-26 15:22:54.025721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.625 [2024-11-26 15:22:54.025771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:55.625 [2024-11-26 15:22:54.025803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.625 [2024-11-26 15:22:54.027835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.625 [2024-11-26 15:22:54.027901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:55.625 pt0 00:06:55.625 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.625 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:55.625 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.625 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.884 95c325c7-627a-4d92-b2b5-6666eafab0dd 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:55.884 15:22:54 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.884 aed333be-50c8-47cf-ab3a-cc4b78b23942 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.884 d0186795-35d2-40a2-a00e-4e18a3619a73 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.884 [2024-11-26 15:22:54.160447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev aed333be-50c8-47cf-ab3a-cc4b78b23942 is claimed 00:06:55.884 [2024-11-26 15:22:54.160536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d0186795-35d2-40a2-a00e-4e18a3619a73 is claimed 00:06:55.884 [2024-11-26 15:22:54.160651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:06:55.884 [2024-11-26 15:22:54.160661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:55.884 [2024-11-26 15:22:54.160905] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:06:55.884 [2024-11-26 15:22:54.161043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:06:55.884 [2024-11-26 15:22:54.161056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:06:55.884 [2024-11-26 15:22:54.161203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.884 [2024-11-26 15:22:54.272701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.884 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.884 [2024-11-26 15:22:54.316634] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.885 [2024-11-26 15:22:54.316665] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'aed333be-50c8-47cf-ab3a-cc4b78b23942' was resized: old size 131072, new size 204800 00:06:55.885 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.885 15:22:54 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:55.885 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.885 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.885 [2024-11-26 15:22:54.328563] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.885 [2024-11-26 15:22:54.328587] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd0186795-35d2-40a2-a00e-4e18a3619a73' was resized: old size 131072, new size 204800 00:06:55.885 [2024-11-26 15:22:54.328610] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:55.885 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.885 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:55.885 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.885 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.885 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:55.885 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.144 15:22:54 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.144 [2024-11-26 15:22:54.436694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.144 [2024-11-26 15:22:54.484527] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:56.144 [2024-11-26 15:22:54.484666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:56.144 [2024-11-26 15:22:54.484695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.144 [2024-11-26 15:22:54.484731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:56.144 [2024-11-26 15:22:54.484846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.144 [2024-11-26 15:22:54.484919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.144 [2024-11-26 15:22:54.484961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.144 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.144 [2024-11-26 15:22:54.496483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:56.144 [2024-11-26 15:22:54.496533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.144 [2024-11-26 15:22:54.496563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:56.144 [2024-11-26 15:22:54.496572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.144 [2024-11-26 15:22:54.498683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.144 [2024-11-26 15:22:54.498719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
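The raid0 block counts in this test are also internally consistent: each 64 MiB lvol is 131072 blocks at blocklen 512, the logged raid blockcnt of 245760 implies 122880 usable blocks per base bdev, i.e. an 8192-block (4 MiB) per-bdev superblock data offset, and the post-resize total of 393216 follows from the same offset at 100 MiB. A shell sketch of that arithmetic (the offset is inferred from the logged numbers, not taken from SPDK source):

```shell
# Arithmetic behind the logged raid0 numbers: 245760 blocks before the
# lvol resize, 393216 after (both from the trace). The 8192-block data
# offset is inferred from those values; raid0_blockcnt is illustrative.
blocklen=512
data_offset=8192   # blocks per base bdev; 8192 * 512 B = 4 MiB, inferred

raid0_blockcnt() {
    local base_mib=$1 nbases=$2
    local base_blocks=$(( base_mib * 1024 * 1024 / blocklen ))
    echo $(( (base_blocks - data_offset) * nbases ))
}

raid0_blockcnt 64 2    # prints 245760, matching the pre-resize trace
raid0_blockcnt 100 2   # prints 393216, matching the post-resize trace
```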
00:06:56.144 [2024-11-26 15:22:54.500151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev aed333be-50c8-47cf-ab3a-cc4b78b23942 00:06:56.145 [2024-11-26 15:22:54.500266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev aed333be-50c8-47cf-ab3a-cc4b78b23942 is claimed 00:06:56.145 [2024-11-26 15:22:54.500368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d0186795-35d2-40a2-a00e-4e18a3619a73 00:06:56.145 [2024-11-26 15:22:54.500394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d0186795-35d2-40a2-a00e-4e18a3619a73 is claimed 00:06:56.145 [2024-11-26 15:22:54.500480] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d0186795-35d2-40a2-a00e-4e18a3619a73 (2) smaller than existing raid bdev Raid (3) 00:06:56.145 [2024-11-26 15:22:54.500495] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev aed333be-50c8-47cf-ab3a-cc4b78b23942: File exists 00:06:56.145 [2024-11-26 15:22:54.500533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.145 [2024-11-26 15:22:54.500562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:56.145 [2024-11-26 15:22:54.500786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:06:56.145 [2024-11-26 15:22:54.500896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.145 [2024-11-26 15:22:54.500907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:56.145 [2024-11-26 15:22:54.501049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.145 pt0 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.145 [2024-11-26 15:22:54.524800] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73261 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73261 ']' 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73261 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73261 00:06:56.145 killing process with pid 73261 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73261' 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73261 00:06:56.145 [2024-11-26 15:22:54.606065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:56.145 [2024-11-26 15:22:54.606146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.145 [2024-11-26 15:22:54.606206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.145 [2024-11-26 15:22:54.606218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:56.145 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73261 00:06:56.404 [2024-11-26 15:22:54.763672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:56.663 15:22:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:56.663 00:06:56.663 real 0m1.968s 00:06:56.663 user 0m2.270s 00:06:56.663 sys 0m0.483s 00:06:56.663 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.663 ************************************ 00:06:56.663 END TEST raid0_resize_superblock_test 00:06:56.663 
************************************ 00:06:56.663 15:22:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.663 15:22:55 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:56.663 15:22:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.663 15:22:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.663 15:22:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.663 ************************************ 00:06:56.663 START TEST raid1_resize_superblock_test 00:06:56.663 ************************************ 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73332 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73332' 00:06:56.663 Process raid pid: 73332 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73332 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73332 ']' 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.663 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.663 [2024-11-26 15:22:55.120124] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:56.663 [2024-11-26 15:22:55.120361] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.921 [2024-11-26 15:22:55.255513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:56.921 [2024-11-26 15:22:55.292545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.921 [2024-11-26 15:22:55.318238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.921 [2024-11-26 15:22:55.360611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.921 [2024-11-26 15:22:55.360719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.488 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.488 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:57.488 15:22:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:57.488 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.488 15:22:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.747 malloc0 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.747 [2024-11-26 15:22:56.050490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:57.747 [2024-11-26 15:22:56.050546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.747 [2024-11-26 15:22:56.050572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:57.747 [2024-11-26 15:22:56.050584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:06:57.747 [2024-11-26 15:22:56.052708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.747 [2024-11-26 15:22:56.052735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:57.747 pt0 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.747 5a74d2d4-7d76-49b7-8205-ccb85574fe09 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.747 2d73163d-bcb2-4d00-bdcb-132f2998e08c 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.747 f97a18e6-27e1-465e-9619-1edc073beab6 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 
-- # case $raid_level in 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.747 [2024-11-26 15:22:56.182878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2d73163d-bcb2-4d00-bdcb-132f2998e08c is claimed 00:06:57.747 [2024-11-26 15:22:56.182954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f97a18e6-27e1-465e-9619-1edc073beab6 is claimed 00:06:57.747 [2024-11-26 15:22:56.183061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:06:57.747 [2024-11-26 15:22:56.183072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:57.747 [2024-11-26 15:22:56.183345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:06:57.747 [2024-11-26 15:22:56.183497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:06:57.747 [2024-11-26 15:22:56.183512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:06:57.747 [2024-11-26 15:22:56.183641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.747 15:22:56 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.747 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.007 [2024-11-26 15:22:56.299115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 
00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.007 [2024-11-26 15:22:56.327032] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:58.007 [2024-11-26 15:22:56.327058] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2d73163d-bcb2-4d00-bdcb-132f2998e08c' was resized: old size 131072, new size 204800 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.007 [2024-11-26 15:22:56.338994] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:58.007 [2024-11-26 15:22:56.339018] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f97a18e6-27e1-465e-9619-1edc073beab6' was resized: old size 131072, new size 204800 00:06:58.007 [2024-11-26 15:22:56.339043] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.007 15:22:56 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:06:58.007 [2024-11-26 15:22:56.451125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.007 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.007 [2024-11-26 15:22:56.478970] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:58.266 [2024-11-26 15:22:56.479084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:58.266 [2024-11-26 15:22:56.479107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:58.266 [2024-11-26 15:22:56.479286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:58.266 [2024-11-26 15:22:56.479427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.266 [2024-11-26 15:22:56.479483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.266 [2024-11-26 15:22:56.479493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.266 15:22:56 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.266 [2024-11-26 15:22:56.490923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:58.266 [2024-11-26 15:22:56.491011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.266 [2024-11-26 15:22:56.491050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:58.266 [2024-11-26 15:22:56.491076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.266 [2024-11-26 15:22:56.493190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.266 [2024-11-26 15:22:56.493268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:58.266 [2024-11-26 15:22:56.494675] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2d73163d-bcb2-4d00-bdcb-132f2998e08c 00:06:58.266 [2024-11-26 15:22:56.494770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2d73163d-bcb2-4d00-bdcb-132f2998e08c is claimed 00:06:58.266 [2024-11-26 15:22:56.494893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f97a18e6-27e1-465e-9619-1edc073beab6 00:06:58.266 [2024-11-26 15:22:56.494946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f97a18e6-27e1-465e-9619-1edc073beab6 is claimed 00:06:58.266 [2024-11-26 15:22:56.495062] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f97a18e6-27e1-465e-9619-1edc073beab6 (2) smaller than existing raid bdev Raid (3) 00:06:58.266 [2024-11-26 15:22:56.495125] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine 
bdev 2d73163d-bcb2-4d00-bdcb-132f2998e08c: File exists 00:06:58.266 [2024-11-26 15:22:56.495221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:58.266 [2024-11-26 15:22:56.495257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:58.266 [2024-11-26 15:22:56.495507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:06:58.266 [2024-11-26 15:22:56.495668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:58.266 [2024-11-26 15:22:56.495708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:58.266 [2024-11-26 15:22:56.495878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.266 pt0 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.266 15:22:56 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.266 [2024-11-26 15:22:56.519370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73332 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73332 ']' 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73332 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73332 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.266 killing process with pid 73332 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73332' 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73332 00:06:58.266 [2024-11-26 15:22:56.599695] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:06:58.266 [2024-11-26 15:22:56.599766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.266 [2024-11-26 15:22:56.599808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.266 [2024-11-26 15:22:56.599819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:58.266 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73332 00:06:58.525 [2024-11-26 15:22:56.757873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.525 15:22:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:58.525 00:06:58.525 real 0m1.934s 00:06:58.525 user 0m2.182s 00:06:58.525 sys 0m0.484s 00:06:58.525 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.525 15:22:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.525 ************************************ 00:06:58.525 END TEST raid1_resize_superblock_test 00:06:58.525 ************************************ 00:06:58.784 15:22:57 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:58.784 15:22:57 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:58.784 15:22:57 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:58.784 15:22:57 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:58.784 15:22:57 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:58.784 15:22:57 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:58.785 15:22:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.785 15:22:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.785 15:22:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.785 ************************************ 
00:06:58.785 START TEST raid_function_test_raid0 00:06:58.785 ************************************ 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=73402 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73402' 00:06:58.785 Process raid pid: 73402 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 73402 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 73402 ']' 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.785 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:58.785 [2024-11-26 15:22:57.145955] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:58.785 [2024-11-26 15:22:57.146164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.053 [2024-11-26 15:22:57.281770] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.053 [2024-11-26 15:22:57.322440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.053 [2024-11-26 15:22:57.348331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.053 [2024-11-26 15:22:57.391213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.053 [2024-11-26 15:22:57.391336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.634 Base_1 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.634 Base_2 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.634 [2024-11-26 15:22:57.995628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:59.634 [2024-11-26 15:22:57.997409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:59.634 [2024-11-26 15:22:57.997470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:06:59.634 [2024-11-26 15:22:57.997480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:59.634 [2024-11-26 15:22:57.997740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:06:59.634 [2024-11-26 15:22:57.997847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:06:59.634 [2024-11-26 15:22:57.997859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:06:59.634 [2024-11-26 15:22:57.997998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.634 15:22:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd 
bdev_raid_get_bdevs online 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:59.634 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:59.894 [2024-11-26 15:22:58.235713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:06:59.894 /dev/nbd0 00:06:59.894 15:22:58 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.894 1+0 records in 00:06:59.894 1+0 records out 00:06:59.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270648 s, 15.1 MB/s 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 
-- # return 0 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:59.894 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:00.154 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.155 { 00:07:00.155 "nbd_device": "/dev/nbd0", 00:07:00.155 "bdev_name": "raid" 00:07:00.155 } 00:07:00.155 ]' 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.155 { 00:07:00.155 "nbd_device": "/dev/nbd0", 00:07:00.155 "bdev_name": "raid" 00:07:00.155 } 00:07:00.155 ]' 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 
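The geometry logged by `raid_bdev_configure_cont` above (`blockcnt 131072, blocklen 512`) follows directly from striping the two `bdev_malloc_create 32 512` base bdevs as raid0. A quick back-of-the-envelope check (variable names here are my own, not from the test scripts):

```shell
# Sanity-check the raid0 geometry printed in the trace: two 32 MiB malloc
# base bdevs with 512-byte blocks should expose 131072 blocks in total.
base_mib=32
blocklen=512
num_bases=2
blocks_per_base=$(( base_mib * 1024 * 1024 / blocklen ))
blockcnt=$(( num_bases * blocks_per_base ))
echo "blockcnt=$blockcnt blocklen=$blocklen"   # prints: blockcnt=131072 blocklen=512
```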
00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:00.155 4096+0 records in 00:07:00.155 4096+0 records out 00:07:00.155 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0319034 s, 65.7 MB/s 00:07:00.155 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:00.415 4096+0 records in 00:07:00.415 4096+0 records out 00:07:00.415 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.180543 s, 11.6 MB/s 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:00.415 128+0 records in 00:07:00.415 128+0 records out 00:07:00.415 65536 bytes (66 kB, 64 KiB) copied, 0.00115058 s, 57.0 MB/s 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:00.415 2035+0 records in 00:07:00.415 2035+0 records out 00:07:00.415 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0127608 s, 81.7 MB/s 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:00.415 456+0 records in 00:07:00.415 456+0 records out 00:07:00.415 233472 bytes (233 kB, 228 KiB) copied, 0.00272889 s, 85.6 MB/s 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # 
return 0 00:07:00.415 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:00.416 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.416 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:00.416 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.416 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:00.416 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.416 15:22:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.676 [2024-11-26 15:22:59.080850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.676 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 73402 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 73402 ']' 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 73402 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73402 
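The `raid_unmap_data_verify` trace above (`bdev_raid.sh@17` through `@52`) repeats a write/discard/compare loop over three (offset, length) pairs. Here is a self-contained re-creation of that pattern using two plain files in place of `/dev/nbd0` and the reference file, so it runs without SPDK or nbd; paths and variable names are illustrative, and zero-filling stands in for `blkdiscard` on the assumption that discarded regions read back as zeros:

```shell
# Sketch of the raid_unmap_data_verify loop: same block offsets/lengths as
# the trace above, but against plain files instead of an nbd device.
set -e
workdir=$(mktemp -d)
blksize=512
rw_blk_num=4096
# Random reference data, "written" to the fake device by copying.
dd if=/dev/urandom of="$workdir/ref" bs=$blksize count=$rw_blk_num status=none
cp "$workdir/ref" "$workdir/dev"
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)
for i in 0 1 2; do
  off=${unmap_blk_offs[$i]}
  num=${unmap_blk_nums[$i]}
  # Zero the reference range, mirroring "dd if=/dev/zero ... conv=notrunc".
  dd if=/dev/zero of="$workdir/ref" bs=$blksize seek="$off" count="$num" conv=notrunc status=none
  # Stand-in for: blkdiscard -o $((off * blksize)) -l $((num * blksize)) /dev/nbd0
  dd if=/dev/zero of="$workdir/dev" bs=$blksize seek="$off" count="$num" conv=notrunc status=none
  # Everything, including the unmapped ranges, must still compare equal.
  cmp -b -n $(( rw_blk_num * blksize )) "$workdir/ref" "$workdir/dev"
done
verify_status=PASS
rm -rf "$workdir"
```

The real test adds `blockdev --flushbufs` between the discard and the compare because it reads through the block layer; with plain files no cache flush is needed.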
00:07:00.948 killing process with pid 73402 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73402' 00:07:00.948 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 73402 00:07:00.948 [2024-11-26 15:22:59.380091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:00.948 [2024-11-26 15:22:59.380199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.949 [2024-11-26 15:22:59.380256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:00.949 [2024-11-26 15:22:59.380267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:07:00.949 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 73402 00:07:00.949 [2024-11-26 15:22:59.402425] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.213 15:22:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:01.213 00:07:01.213 real 0m2.547s 00:07:01.213 user 0m3.169s 00:07:01.213 sys 0m0.839s 00:07:01.213 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.213 15:22:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:01.213 ************************************ 00:07:01.213 END TEST raid_function_test_raid0 00:07:01.213 ************************************ 00:07:01.213 15:22:59 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:01.214 15:22:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:07:01.214 15:22:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.214 15:22:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:01.214 ************************************ 00:07:01.214 START TEST raid_function_test_concat 00:07:01.214 ************************************ 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=73514 00:07:01.214 Process raid pid: 73514 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73514' 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 73514 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 73514 ']' 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
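The concat variant starts the same way as the raid0 one: `bdev_svc` is launched and `waitforlisten` polls (with `max_retries=100`) for the `/var/tmp/spdk.sock` RPC socket. A simplified stand-in for that polling loop; the real helper in `autotest_common.sh` also issues RPC probes, and the function name and sleep interval here are my own:

```shell
# Minimal wait-for-socket loop in the spirit of waitforlisten: succeed once
# a UNIX socket appears at $sock, fail after max_retries polls.
waitforlisten_sketch() {
  local sock=$1 max_retries=${2:-100} i=0
  while (( i < max_retries )); do
    [ -S "$sock" ] && return 0
    sleep 0.1
    i=$(( i + 1 ))
  done
  return 1
}
```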
00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.214 15:22:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:01.473 [2024-11-26 15:22:59.763741] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:01.473 [2024-11-26 15:22:59.763882] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.473 [2024-11-26 15:22:59.899311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:01.473 [2024-11-26 15:22:59.936034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.734 [2024-11-26 15:22:59.960612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.734 [2024-11-26 15:23:00.002723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.734 [2024-11-26 15:23:00.002760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:02.304 Base_1 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.304 15:23:00 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:02.304 Base_2 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:02.304 [2024-11-26 15:23:00.606565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:02.304 [2024-11-26 15:23:00.608359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:02.304 [2024-11-26 15:23:00.608431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:02.304 [2024-11-26 15:23:00.608443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:02.304 [2024-11-26 15:23:00.608711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:02.304 [2024-11-26 15:23:00.608844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:02.304 [2024-11-26 15:23:00.608864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:07:02.304 [2024-11-26 15:23:00.608990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.304 15:23:00 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:02.304 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:02.564 [2024-11-26 15:23:00.842651] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:07:02.564 /dev/nbd0 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.564 1+0 records in 00:07:02.564 1+0 records out 00:07:02.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428722 s, 9.6 MB/s 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 
4096 '!=' 0 ']' 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:02.564 15:23:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.825 { 00:07:02.825 "nbd_device": "/dev/nbd0", 00:07:02.825 "bdev_name": "raid" 00:07:02.825 } 00:07:02.825 ]' 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.825 { 00:07:02.825 "nbd_device": "/dev/nbd0", 00:07:02.825 "bdev_name": "raid" 00:07:02.825 } 00:07:02.825 ]' 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:02.825 4096+0 records in 00:07:02.825 4096+0 records out 00:07:02.825 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.0336045 s, 62.4 MB/s 00:07:02.825 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:03.085 4096+0 records in 00:07:03.085 4096+0 records out 00:07:03.085 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.173466 s, 12.1 MB/s 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:03.085 128+0 records in 00:07:03.085 128+0 records out 00:07:03.085 65536 bytes (66 kB, 64 KiB) copied, 0.00115165 s, 56.9 MB/s 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:03.085 15:23:01 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:03.085 2035+0 records in 00:07:03.085 2035+0 records out 00:07:03.085 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0142645 s, 73.0 MB/s 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:03.085 456+0 records in 00:07:03.085 456+0 records out 00:07:03.085 233472 bytes (233 kB, 228 KiB) copied, 0.00338705 s, 68.9 MB/s 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:03.085 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:03.085 15:23:01 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:03.086 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:03.086 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:03.086 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:03.086 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:03.086 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.086 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:03.086 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.086 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.346 [2024-11-26 15:23:01.680656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.346 
15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:03.346 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 73514 00:07:03.606 15:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 73514 ']' 00:07:03.607 15:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 73514 00:07:03.607 15:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:03.607 15:23:01 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.607 15:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73514 00:07:03.607 15:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.607 15:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.607 killing process with pid 73514 00:07:03.607 15:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73514' 00:07:03.607 15:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 73514 00:07:03.607 [2024-11-26 15:23:01.967728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.607 [2024-11-26 15:23:01.967821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.607 15:23:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 73514 00:07:03.607 [2024-11-26 15:23:01.967892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.607 [2024-11-26 15:23:01.967910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:07:03.607 [2024-11-26 15:23:01.990176] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.867 15:23:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:03.867 00:07:03.867 real 0m2.513s 00:07:03.867 user 0m3.102s 00:07:03.867 sys 0m0.850s 00:07:03.867 15:23:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.867 15:23:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:03.867 ************************************ 00:07:03.867 END TEST raid_function_test_concat 00:07:03.867 ************************************ 
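The raid_unmap_data_verify loop traced above drives three (offset, length) discard cases; every byte value in the trace is just the block offset or block count multiplied by the 512-byte logical sector size reported by lsblk. A minimal self-contained sketch of that offset arithmetic, with the array values taken from the trace (the actual dd/blkdiscard/cmp I/O against /dev/nbd0 is shown only as comments, since it needs the nbd device):

```shell
#!/usr/bin/env bash
# Sketch of the offset math used by raid_unmap_data_verify (bdev_raid.sh@36-48).
# blksize comes from `lsblk -o LOG-SEC /dev/nbd0` in the real test; hardcoded here.
blksize=512
unmap_blk_offs=('0' '1028' '321')   # block offsets, as in the trace
unmap_blk_nums=('128' '2035' '456') # block counts, as in the trace

for ((i = 0; i < 3; i++)); do
	unmap_off=$((blksize * unmap_blk_offs[i]))
	unmap_len=$((blksize * unmap_blk_nums[i]))
	# The real test zeroes the same range in the reference file, discards it
	# on the device, flushes, and compares the full 2 MiB region:
	#   dd if=/dev/zero of=/raidtest/raidrandtest bs=$blksize \
	#      seek=${unmap_blk_offs[i]} count=${unmap_blk_nums[i]} conv=notrunc
	#   blkdiscard -o $unmap_off -l $unmap_len /dev/nbd0
	#   blockdev --flushbufs /dev/nbd0
	#   cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
	echo "$unmap_off $unmap_len"
done
```

Running the sketch prints the same three ranges seen in the blkdiscard invocations above: `0 65536`, `526336 1041920`, and `164352 233472`.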
00:07:03.867 15:23:02 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:03.867 15:23:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.867 15:23:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.867 15:23:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.867 ************************************ 00:07:03.867 START TEST raid0_resize_test 00:07:03.867 ************************************ 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73626 00:07:03.867 Process raid pid: 73626 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73626' 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73626 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 
73626 ']' 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.867 15:23:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.128 [2024-11-26 15:23:02.347299] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:04.128 [2024-11-26 15:23:02.347446] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.128 [2024-11-26 15:23:02.483726] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:04.128 [2024-11-26 15:23:02.523031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.128 [2024-11-26 15:23:02.547422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.128 [2024-11-26 15:23:02.589626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.128 [2024-11-26 15:23:02.589669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.698 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.698 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:04.698 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:04.698 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.698 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.959 Base_1 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.959 Base_2 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.959 [2024-11-26 15:23:03.196072] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:04.959 [2024-11-26 15:23:03.197788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:04.959 [2024-11-26 15:23:03.197848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:04.959 [2024-11-26 15:23:03.197857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:04.959 [2024-11-26 15:23:03.198110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:04.959 [2024-11-26 15:23:03.198230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:04.959 [2024-11-26 15:23:03.198250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:04.959 [2024-11-26 15:23:03.198370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.959 [2024-11-26 15:23:03.208037] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.959 [2024-11-26 15:23:03.208062] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:04.959 true 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 
00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.959 [2024-11-26 15:23:03.224246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.959 [2024-11-26 15:23:03.264049] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.959 [2024-11-26 15:23:03.264078] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:04.959 [2024-11-26 15:23:03.264095] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:04.959 true 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:04.959 [2024-11-26 15:23:03.276230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73626 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 73626 ']' 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 73626 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73626 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.959 killing process with pid 73626 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73626' 00:07:04.959 15:23:03 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@973 -- # kill 73626 00:07:04.959 [2024-11-26 15:23:03.361218] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:04.959 [2024-11-26 15:23:03.361296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:04.959 [2024-11-26 15:23:03.361336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:04.960 [2024-11-26 15:23:03.361346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:04.960 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 73626 00:07:04.960 [2024-11-26 15:23:03.362812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.220 15:23:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:05.220 00:07:05.220 real 0m1.312s 00:07:05.220 user 0m1.466s 00:07:05.220 sys 0m0.298s 00:07:05.220 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.220 15:23:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 ************************************ 00:07:05.220 END TEST raid0_resize_test 00:07:05.220 ************************************ 00:07:05.220 15:23:03 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:05.220 15:23:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.220 15:23:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.220 15:23:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 ************************************ 00:07:05.220 START TEST raid1_resize_test 00:07:05.220 ************************************ 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 
00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73676 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:05.220 Process raid pid: 73676 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73676' 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73676 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 73676 ']' 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.220 15:23:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.480 [2024-11-26 15:23:03.728892] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:05.480 [2024-11-26 15:23:03.729019] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.480 [2024-11-26 15:23:03.864006] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:05.480 [2024-11-26 15:23:03.900447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.480 [2024-11-26 15:23:03.925388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.740 [2024-11-26 15:23:03.967504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.740 [2024-11-26 15:23:03.967543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 Base_1 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 
00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 Base_2 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 [2024-11-26 15:23:04.573898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:06.311 [2024-11-26 15:23:04.575678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:06.311 [2024-11-26 15:23:04.575743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:06.311 [2024-11-26 15:23:04.575753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:06.311 [2024-11-26 15:23:04.576015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:06.311 [2024-11-26 15:23:04.576120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:06.311 [2024-11-26 15:23:04.576142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:06.311 [2024-11-26 15:23:04.576271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:06.311 
15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 [2024-11-26 15:23:04.581859] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:06.311 [2024-11-26 15:23:04.581883] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:06.311 true 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 [2024-11-26 15:23:04.594042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:07:06.311 [2024-11-26 15:23:04.641875] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:06.311 [2024-11-26 15:23:04.641902] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:06.311 [2024-11-26 15:23:04.641922] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:06.311 true 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:06.311 [2024-11-26 15:23:04.654059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73676 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 73676 ']' 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 73676 
00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73676 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.311 killing process with pid 73676 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73676' 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 73676 00:07:06.311 [2024-11-26 15:23:04.739885] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.311 [2024-11-26 15:23:04.739984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.311 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 73676 00:07:06.311 [2024-11-26 15:23:04.740421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.311 [2024-11-26 15:23:04.740444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:06.311 [2024-11-26 15:23:04.741567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.571 15:23:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:06.571 00:07:06.571 real 0m1.310s 00:07:06.571 user 0m1.467s 00:07:06.571 sys 0m0.306s 00:07:06.571 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.572 15:23:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.572 ************************************ 00:07:06.572 END TEST 
raid1_resize_test 00:07:06.572 ************************************ 00:07:06.572 15:23:05 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:06.572 15:23:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:06.572 15:23:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:06.572 15:23:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:06.572 15:23:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.572 15:23:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.572 ************************************ 00:07:06.572 START TEST raid_state_function_test 00:07:06.572 ************************************ 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:06.572 15:23:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73728 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73728' 00:07:06.572 Process raid pid: 73728 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73728 00:07:06.572 15:23:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73728 ']' 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.572 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.832 [2024-11-26 15:23:05.113386] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:06.832 [2024-11-26 15:23:05.113500] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.832 [2024-11-26 15:23:05.247651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:06.832 [2024-11-26 15:23:05.286763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.092 [2024-11-26 15:23:05.311472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.092 [2024-11-26 15:23:05.353434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.092 [2024-11-26 15:23:05.353476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.665 [2024-11-26 15:23:05.927892] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:07.665 [2024-11-26 15:23:05.927937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:07.665 [2024-11-26 15:23:05.927955] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.665 [2024-11-26 15:23:05.927963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.665 "name": "Existed_Raid", 00:07:07.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.665 "strip_size_kb": 64, 00:07:07.665 "state": "configuring", 00:07:07.665 "raid_level": "raid0", 00:07:07.665 "superblock": false, 00:07:07.665 "num_base_bdevs": 2, 00:07:07.665 "num_base_bdevs_discovered": 0, 00:07:07.665 "num_base_bdevs_operational": 2, 00:07:07.665 "base_bdevs_list": [ 00:07:07.665 { 00:07:07.665 "name": "BaseBdev1", 00:07:07.665 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:07.665 "is_configured": false, 00:07:07.665 "data_offset": 0, 00:07:07.665 "data_size": 0 00:07:07.665 }, 00:07:07.665 { 00:07:07.665 "name": "BaseBdev2", 00:07:07.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.665 "is_configured": false, 00:07:07.665 "data_offset": 0, 00:07:07.665 "data_size": 0 00:07:07.665 } 00:07:07.665 ] 00:07:07.665 }' 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.665 15:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.925 [2024-11-26 15:23:06.359906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:07.925 [2024-11-26 15:23:06.359945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.925 [2024-11-26 15:23:06.371940] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:07.925 [2024-11-26 15:23:06.371972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:07.925 [2024-11-26 
15:23:06.371983] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.925 [2024-11-26 15:23:06.371992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.925 [2024-11-26 15:23:06.392708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:07.925 BaseBdev1 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.925 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.186 15:23:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.186 [ 00:07:08.186 { 00:07:08.186 "name": "BaseBdev1", 00:07:08.186 "aliases": [ 00:07:08.186 "6b856d1b-f7fc-47c6-a49f-4fae5d26e229" 00:07:08.186 ], 00:07:08.186 "product_name": "Malloc disk", 00:07:08.186 "block_size": 512, 00:07:08.186 "num_blocks": 65536, 00:07:08.186 "uuid": "6b856d1b-f7fc-47c6-a49f-4fae5d26e229", 00:07:08.186 "assigned_rate_limits": { 00:07:08.186 "rw_ios_per_sec": 0, 00:07:08.186 "rw_mbytes_per_sec": 0, 00:07:08.186 "r_mbytes_per_sec": 0, 00:07:08.186 "w_mbytes_per_sec": 0 00:07:08.186 }, 00:07:08.186 "claimed": true, 00:07:08.186 "claim_type": "exclusive_write", 00:07:08.186 "zoned": false, 00:07:08.186 "supported_io_types": { 00:07:08.186 "read": true, 00:07:08.186 "write": true, 00:07:08.186 "unmap": true, 00:07:08.186 "flush": true, 00:07:08.186 "reset": true, 00:07:08.186 "nvme_admin": false, 00:07:08.186 "nvme_io": false, 00:07:08.186 "nvme_io_md": false, 00:07:08.186 "write_zeroes": true, 00:07:08.186 "zcopy": true, 00:07:08.186 "get_zone_info": false, 00:07:08.186 "zone_management": false, 00:07:08.186 "zone_append": false, 00:07:08.186 "compare": false, 00:07:08.186 "compare_and_write": false, 00:07:08.186 "abort": true, 00:07:08.186 "seek_hole": false, 00:07:08.186 "seek_data": false, 00:07:08.186 "copy": true, 00:07:08.186 "nvme_iov_md": false 00:07:08.186 }, 00:07:08.186 "memory_domains": [ 00:07:08.186 { 00:07:08.186 "dma_device_id": "system", 00:07:08.186 "dma_device_type": 1 00:07:08.186 }, 00:07:08.186 { 00:07:08.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.186 "dma_device_type": 
2 00:07:08.186 } 00:07:08.186 ], 00:07:08.186 "driver_specific": {} 00:07:08.186 } 00:07:08.186 ] 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.186 "name": "Existed_Raid", 00:07:08.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.186 "strip_size_kb": 64, 00:07:08.186 "state": "configuring", 00:07:08.186 "raid_level": "raid0", 00:07:08.186 "superblock": false, 00:07:08.186 "num_base_bdevs": 2, 00:07:08.186 "num_base_bdevs_discovered": 1, 00:07:08.186 "num_base_bdevs_operational": 2, 00:07:08.186 "base_bdevs_list": [ 00:07:08.186 { 00:07:08.186 "name": "BaseBdev1", 00:07:08.186 "uuid": "6b856d1b-f7fc-47c6-a49f-4fae5d26e229", 00:07:08.186 "is_configured": true, 00:07:08.186 "data_offset": 0, 00:07:08.186 "data_size": 65536 00:07:08.186 }, 00:07:08.186 { 00:07:08.186 "name": "BaseBdev2", 00:07:08.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.186 "is_configured": false, 00:07:08.186 "data_offset": 0, 00:07:08.186 "data_size": 0 00:07:08.186 } 00:07:08.186 ] 00:07:08.186 }' 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.186 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.447 [2024-11-26 15:23:06.844866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.447 [2024-11-26 15:23:06.844925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.447 15:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.447 [2024-11-26 15:23:06.856944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:08.447 [2024-11-26 15:23:06.858736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.447 [2024-11-26 15:23:06.858768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.447 "name": "Existed_Raid", 00:07:08.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.447 "strip_size_kb": 64, 00:07:08.447 "state": "configuring", 00:07:08.447 "raid_level": "raid0", 00:07:08.447 "superblock": false, 00:07:08.447 "num_base_bdevs": 2, 00:07:08.447 "num_base_bdevs_discovered": 1, 00:07:08.447 "num_base_bdevs_operational": 2, 00:07:08.447 "base_bdevs_list": [ 00:07:08.447 { 00:07:08.447 "name": "BaseBdev1", 00:07:08.447 "uuid": "6b856d1b-f7fc-47c6-a49f-4fae5d26e229", 00:07:08.447 "is_configured": true, 00:07:08.447 "data_offset": 0, 00:07:08.447 "data_size": 65536 00:07:08.447 }, 00:07:08.447 { 00:07:08.447 "name": "BaseBdev2", 00:07:08.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.447 "is_configured": false, 00:07:08.447 "data_offset": 0, 00:07:08.447 "data_size": 0 00:07:08.447 } 00:07:08.447 ] 00:07:08.447 }' 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.447 15:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.030 
15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.030 [2024-11-26 15:23:07.287976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.030 [2024-11-26 15:23:07.288014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:09.030 [2024-11-26 15:23:07.288043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:09.030 [2024-11-26 15:23:07.288300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:09.030 [2024-11-26 15:23:07.288445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:09.030 [2024-11-26 15:23:07.288463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:09.030 [2024-11-26 15:23:07.288668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.030 BaseBdev2 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.030 [ 00:07:09.030 { 00:07:09.030 "name": "BaseBdev2", 00:07:09.030 "aliases": [ 00:07:09.030 "e8205951-dc50-439b-9320-81b599b4f929" 00:07:09.030 ], 00:07:09.030 "product_name": "Malloc disk", 00:07:09.030 "block_size": 512, 00:07:09.030 "num_blocks": 65536, 00:07:09.030 "uuid": "e8205951-dc50-439b-9320-81b599b4f929", 00:07:09.030 "assigned_rate_limits": { 00:07:09.030 "rw_ios_per_sec": 0, 00:07:09.030 "rw_mbytes_per_sec": 0, 00:07:09.030 "r_mbytes_per_sec": 0, 00:07:09.030 "w_mbytes_per_sec": 0 00:07:09.030 }, 00:07:09.030 "claimed": true, 00:07:09.030 "claim_type": "exclusive_write", 00:07:09.030 "zoned": false, 00:07:09.030 "supported_io_types": { 00:07:09.030 "read": true, 00:07:09.030 "write": true, 00:07:09.030 "unmap": true, 00:07:09.030 "flush": true, 00:07:09.030 "reset": true, 00:07:09.030 "nvme_admin": false, 00:07:09.030 "nvme_io": false, 00:07:09.030 "nvme_io_md": false, 00:07:09.030 "write_zeroes": true, 00:07:09.030 "zcopy": true, 00:07:09.030 "get_zone_info": false, 00:07:09.030 "zone_management": false, 00:07:09.030 "zone_append": false, 00:07:09.030 "compare": false, 00:07:09.030 "compare_and_write": false, 
00:07:09.030 "abort": true, 00:07:09.030 "seek_hole": false, 00:07:09.030 "seek_data": false, 00:07:09.030 "copy": true, 00:07:09.030 "nvme_iov_md": false 00:07:09.030 }, 00:07:09.030 "memory_domains": [ 00:07:09.030 { 00:07:09.030 "dma_device_id": "system", 00:07:09.030 "dma_device_type": 1 00:07:09.030 }, 00:07:09.030 { 00:07:09.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.030 "dma_device_type": 2 00:07:09.030 } 00:07:09.030 ], 00:07:09.030 "driver_specific": {} 00:07:09.030 } 00:07:09.030 ] 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.030 
15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.030 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.030 "name": "Existed_Raid", 00:07:09.030 "uuid": "62c25a45-9385-4571-9508-6f781c0c37b6", 00:07:09.030 "strip_size_kb": 64, 00:07:09.030 "state": "online", 00:07:09.030 "raid_level": "raid0", 00:07:09.030 "superblock": false, 00:07:09.030 "num_base_bdevs": 2, 00:07:09.030 "num_base_bdevs_discovered": 2, 00:07:09.030 "num_base_bdevs_operational": 2, 00:07:09.030 "base_bdevs_list": [ 00:07:09.030 { 00:07:09.030 "name": "BaseBdev1", 00:07:09.030 "uuid": "6b856d1b-f7fc-47c6-a49f-4fae5d26e229", 00:07:09.030 "is_configured": true, 00:07:09.030 "data_offset": 0, 00:07:09.030 "data_size": 65536 00:07:09.030 }, 00:07:09.030 { 00:07:09.030 "name": "BaseBdev2", 00:07:09.030 "uuid": "e8205951-dc50-439b-9320-81b599b4f929", 00:07:09.030 "is_configured": true, 00:07:09.030 "data_offset": 0, 00:07:09.030 "data_size": 65536 00:07:09.030 } 00:07:09.030 ] 00:07:09.030 }' 00:07:09.031 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.031 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:09.304 15:23:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:09.304 [2024-11-26 15:23:07.744419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.304 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.565 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.565 "name": "Existed_Raid", 00:07:09.565 "aliases": [ 00:07:09.565 "62c25a45-9385-4571-9508-6f781c0c37b6" 00:07:09.565 ], 00:07:09.565 "product_name": "Raid Volume", 00:07:09.565 "block_size": 512, 00:07:09.565 "num_blocks": 131072, 00:07:09.565 "uuid": "62c25a45-9385-4571-9508-6f781c0c37b6", 00:07:09.565 "assigned_rate_limits": { 00:07:09.565 "rw_ios_per_sec": 0, 00:07:09.565 "rw_mbytes_per_sec": 0, 00:07:09.565 "r_mbytes_per_sec": 0, 00:07:09.565 "w_mbytes_per_sec": 0 00:07:09.565 }, 00:07:09.565 "claimed": false, 00:07:09.565 "zoned": false, 00:07:09.565 "supported_io_types": { 00:07:09.565 "read": true, 00:07:09.565 "write": true, 00:07:09.565 "unmap": true, 00:07:09.565 
"flush": true, 00:07:09.565 "reset": true, 00:07:09.565 "nvme_admin": false, 00:07:09.565 "nvme_io": false, 00:07:09.565 "nvme_io_md": false, 00:07:09.565 "write_zeroes": true, 00:07:09.565 "zcopy": false, 00:07:09.565 "get_zone_info": false, 00:07:09.565 "zone_management": false, 00:07:09.565 "zone_append": false, 00:07:09.565 "compare": false, 00:07:09.565 "compare_and_write": false, 00:07:09.565 "abort": false, 00:07:09.565 "seek_hole": false, 00:07:09.565 "seek_data": false, 00:07:09.565 "copy": false, 00:07:09.565 "nvme_iov_md": false 00:07:09.565 }, 00:07:09.565 "memory_domains": [ 00:07:09.565 { 00:07:09.565 "dma_device_id": "system", 00:07:09.565 "dma_device_type": 1 00:07:09.565 }, 00:07:09.565 { 00:07:09.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.565 "dma_device_type": 2 00:07:09.565 }, 00:07:09.565 { 00:07:09.565 "dma_device_id": "system", 00:07:09.565 "dma_device_type": 1 00:07:09.565 }, 00:07:09.565 { 00:07:09.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.565 "dma_device_type": 2 00:07:09.565 } 00:07:09.565 ], 00:07:09.565 "driver_specific": { 00:07:09.565 "raid": { 00:07:09.565 "uuid": "62c25a45-9385-4571-9508-6f781c0c37b6", 00:07:09.565 "strip_size_kb": 64, 00:07:09.565 "state": "online", 00:07:09.565 "raid_level": "raid0", 00:07:09.565 "superblock": false, 00:07:09.565 "num_base_bdevs": 2, 00:07:09.565 "num_base_bdevs_discovered": 2, 00:07:09.565 "num_base_bdevs_operational": 2, 00:07:09.565 "base_bdevs_list": [ 00:07:09.565 { 00:07:09.565 "name": "BaseBdev1", 00:07:09.565 "uuid": "6b856d1b-f7fc-47c6-a49f-4fae5d26e229", 00:07:09.565 "is_configured": true, 00:07:09.565 "data_offset": 0, 00:07:09.565 "data_size": 65536 00:07:09.565 }, 00:07:09.565 { 00:07:09.565 "name": "BaseBdev2", 00:07:09.565 "uuid": "e8205951-dc50-439b-9320-81b599b4f929", 00:07:09.565 "is_configured": true, 00:07:09.565 "data_offset": 0, 00:07:09.565 "data_size": 65536 00:07:09.565 } 00:07:09.565 ] 00:07:09.565 } 00:07:09.565 } 00:07:09.565 }' 00:07:09.565 
15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:09.565 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:09.565 BaseBdev2' 00:07:09.565 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.565 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:09.565 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.565 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.566 15:23:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.566 [2024-11-26 15:23:07.976290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:09.566 [2024-11-26 15:23:07.976319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:09.566 [2024-11-26 15:23:07.976367] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.566 15:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.566 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.826 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.826 "name": "Existed_Raid", 00:07:09.826 "uuid": "62c25a45-9385-4571-9508-6f781c0c37b6", 00:07:09.826 "strip_size_kb": 64, 00:07:09.826 "state": "offline", 00:07:09.826 "raid_level": "raid0", 00:07:09.826 "superblock": false, 00:07:09.826 "num_base_bdevs": 2, 00:07:09.826 "num_base_bdevs_discovered": 1, 00:07:09.826 "num_base_bdevs_operational": 1, 00:07:09.826 
"base_bdevs_list": [ 00:07:09.826 { 00:07:09.826 "name": null, 00:07:09.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.826 "is_configured": false, 00:07:09.826 "data_offset": 0, 00:07:09.826 "data_size": 65536 00:07:09.826 }, 00:07:09.826 { 00:07:09.826 "name": "BaseBdev2", 00:07:09.826 "uuid": "e8205951-dc50-439b-9320-81b599b4f929", 00:07:09.826 "is_configured": true, 00:07:09.826 "data_offset": 0, 00:07:09.826 "data_size": 65536 00:07:09.826 } 00:07:09.826 ] 00:07:09.826 }' 00:07:09.826 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.826 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:10.087 [2024-11-26 15:23:08.451470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:10.087 [2024-11-26 15:23:08.451522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73728 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73728 ']' 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73728 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73728 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.087 killing process with pid 73728 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73728' 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73728 00:07:10.087 [2024-11-26 15:23:08.554304] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.087 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73728 00:07:10.087 [2024-11-26 15:23:08.555258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.347 15:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:10.347 00:07:10.347 real 0m3.738s 00:07:10.347 user 0m5.957s 00:07:10.347 sys 0m0.691s 00:07:10.347 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.347 15:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.347 ************************************ 00:07:10.347 END TEST raid_state_function_test 00:07:10.347 ************************************ 00:07:10.347 15:23:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:10.347 15:23:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:10.347 15:23:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.347 15:23:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.607 ************************************ 00:07:10.607 START TEST 
raid_state_function_test_sb 00:07:10.607 ************************************ 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:10.607 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:10.608 
15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73964 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:10.608 Process raid pid: 73964 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73964' 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73964 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73964 ']' 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.608 15:23:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.608 [2024-11-26 15:23:08.947623] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:10.608 [2024-11-26 15:23:08.947825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.868 [2024-11-26 15:23:09.091385] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:10.868 [2024-11-26 15:23:09.128095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.868 [2024-11-26 15:23:09.152881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.868 [2024-11-26 15:23:09.195265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.868 [2024-11-26 15:23:09.195305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.441 [2024-11-26 15:23:09.785697] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:07:11.441 [2024-11-26 15:23:09.785841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.441 [2024-11-26 15:23:09.785865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.441 [2024-11-26 15:23:09.785887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.441 15:23:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.441 "name": "Existed_Raid", 00:07:11.441 "uuid": "4e9d5528-5a7e-4631-8d6c-432aee3ac431", 00:07:11.441 "strip_size_kb": 64, 00:07:11.441 "state": "configuring", 00:07:11.441 "raid_level": "raid0", 00:07:11.441 "superblock": true, 00:07:11.441 "num_base_bdevs": 2, 00:07:11.441 "num_base_bdevs_discovered": 0, 00:07:11.441 "num_base_bdevs_operational": 2, 00:07:11.441 "base_bdevs_list": [ 00:07:11.441 { 00:07:11.441 "name": "BaseBdev1", 00:07:11.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.441 "is_configured": false, 00:07:11.441 "data_offset": 0, 00:07:11.441 "data_size": 0 00:07:11.441 }, 00:07:11.441 { 00:07:11.441 "name": "BaseBdev2", 00:07:11.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.441 "is_configured": false, 00:07:11.441 "data_offset": 0, 00:07:11.441 "data_size": 0 00:07:11.441 } 00:07:11.441 ] 00:07:11.441 }' 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.441 15:23:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.012 [2024-11-26 15:23:10.201695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:12.012 [2024-11-26 15:23:10.201781] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.012 [2024-11-26 15:23:10.213733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.012 [2024-11-26 15:23:10.213772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.012 [2024-11-26 15:23:10.213783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.012 [2024-11-26 15:23:10.213792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.012 [2024-11-26 15:23:10.234575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.012 BaseBdev1 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:12.012 
15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.012 [ 00:07:12.012 { 00:07:12.012 "name": "BaseBdev1", 00:07:12.012 "aliases": [ 00:07:12.012 "7abc9b57-3f2d-44cc-91f7-d81fd6caad2c" 00:07:12.012 ], 00:07:12.012 "product_name": "Malloc disk", 00:07:12.012 "block_size": 512, 00:07:12.012 "num_blocks": 65536, 00:07:12.012 "uuid": "7abc9b57-3f2d-44cc-91f7-d81fd6caad2c", 00:07:12.012 "assigned_rate_limits": { 00:07:12.012 "rw_ios_per_sec": 0, 00:07:12.012 "rw_mbytes_per_sec": 0, 00:07:12.012 "r_mbytes_per_sec": 0, 00:07:12.012 "w_mbytes_per_sec": 0 00:07:12.012 }, 00:07:12.012 "claimed": true, 00:07:12.012 "claim_type": "exclusive_write", 00:07:12.012 "zoned": 
false, 00:07:12.012 "supported_io_types": { 00:07:12.012 "read": true, 00:07:12.012 "write": true, 00:07:12.012 "unmap": true, 00:07:12.012 "flush": true, 00:07:12.012 "reset": true, 00:07:12.012 "nvme_admin": false, 00:07:12.012 "nvme_io": false, 00:07:12.012 "nvme_io_md": false, 00:07:12.012 "write_zeroes": true, 00:07:12.012 "zcopy": true, 00:07:12.012 "get_zone_info": false, 00:07:12.012 "zone_management": false, 00:07:12.012 "zone_append": false, 00:07:12.012 "compare": false, 00:07:12.012 "compare_and_write": false, 00:07:12.012 "abort": true, 00:07:12.012 "seek_hole": false, 00:07:12.012 "seek_data": false, 00:07:12.012 "copy": true, 00:07:12.012 "nvme_iov_md": false 00:07:12.012 }, 00:07:12.012 "memory_domains": [ 00:07:12.012 { 00:07:12.012 "dma_device_id": "system", 00:07:12.012 "dma_device_type": 1 00:07:12.012 }, 00:07:12.012 { 00:07:12.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.012 "dma_device_type": 2 00:07:12.012 } 00:07:12.012 ], 00:07:12.012 "driver_specific": {} 00:07:12.012 } 00:07:12.012 ] 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.012 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.013 "name": "Existed_Raid", 00:07:12.013 "uuid": "597e50d2-f23f-4633-82e4-62edb218c63a", 00:07:12.013 "strip_size_kb": 64, 00:07:12.013 "state": "configuring", 00:07:12.013 "raid_level": "raid0", 00:07:12.013 "superblock": true, 00:07:12.013 "num_base_bdevs": 2, 00:07:12.013 "num_base_bdevs_discovered": 1, 00:07:12.013 "num_base_bdevs_operational": 2, 00:07:12.013 "base_bdevs_list": [ 00:07:12.013 { 00:07:12.013 "name": "BaseBdev1", 00:07:12.013 "uuid": "7abc9b57-3f2d-44cc-91f7-d81fd6caad2c", 00:07:12.013 "is_configured": true, 00:07:12.013 "data_offset": 2048, 00:07:12.013 "data_size": 63488 00:07:12.013 }, 00:07:12.013 { 00:07:12.013 "name": "BaseBdev2", 00:07:12.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.013 "is_configured": false, 00:07:12.013 "data_offset": 0, 00:07:12.013 "data_size": 0 00:07:12.013 } 00:07:12.013 ] 
00:07:12.013 }' 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.013 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.273 [2024-11-26 15:23:10.686704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:12.273 [2024-11-26 15:23:10.686801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.273 [2024-11-26 15:23:10.698755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.273 [2024-11-26 15:23:10.700587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.273 [2024-11-26 15:23:10.700623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:12.273 15:23:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.273 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.274 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.274 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.274 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.274 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.274 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.274 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.274 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.274 "name": 
"Existed_Raid", 00:07:12.274 "uuid": "2101416f-0f40-48ac-890d-425a72d06ea8", 00:07:12.274 "strip_size_kb": 64, 00:07:12.274 "state": "configuring", 00:07:12.274 "raid_level": "raid0", 00:07:12.274 "superblock": true, 00:07:12.274 "num_base_bdevs": 2, 00:07:12.274 "num_base_bdevs_discovered": 1, 00:07:12.274 "num_base_bdevs_operational": 2, 00:07:12.274 "base_bdevs_list": [ 00:07:12.274 { 00:07:12.274 "name": "BaseBdev1", 00:07:12.274 "uuid": "7abc9b57-3f2d-44cc-91f7-d81fd6caad2c", 00:07:12.274 "is_configured": true, 00:07:12.274 "data_offset": 2048, 00:07:12.274 "data_size": 63488 00:07:12.274 }, 00:07:12.274 { 00:07:12.274 "name": "BaseBdev2", 00:07:12.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.274 "is_configured": false, 00:07:12.274 "data_offset": 0, 00:07:12.274 "data_size": 0 00:07:12.274 } 00:07:12.274 ] 00:07:12.274 }' 00:07:12.274 15:23:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.274 15:23:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.845 [2024-11-26 15:23:11.053863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.845 [2024-11-26 15:23:11.054140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:12.845 [2024-11-26 15:23:11.054212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.845 [2024-11-26 15:23:11.054505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:12.845 BaseBdev2 00:07:12.845 [2024-11-26 15:23:11.054685] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:12.845 [2024-11-26 15:23:11.054697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:12.845 [2024-11-26 15:23:11.054809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.845 [ 00:07:12.845 
{ 00:07:12.845 "name": "BaseBdev2", 00:07:12.845 "aliases": [ 00:07:12.845 "bd51a5ad-d930-4bb0-8511-528f32a1e1d0" 00:07:12.845 ], 00:07:12.845 "product_name": "Malloc disk", 00:07:12.845 "block_size": 512, 00:07:12.845 "num_blocks": 65536, 00:07:12.845 "uuid": "bd51a5ad-d930-4bb0-8511-528f32a1e1d0", 00:07:12.845 "assigned_rate_limits": { 00:07:12.845 "rw_ios_per_sec": 0, 00:07:12.845 "rw_mbytes_per_sec": 0, 00:07:12.845 "r_mbytes_per_sec": 0, 00:07:12.845 "w_mbytes_per_sec": 0 00:07:12.845 }, 00:07:12.845 "claimed": true, 00:07:12.845 "claim_type": "exclusive_write", 00:07:12.845 "zoned": false, 00:07:12.845 "supported_io_types": { 00:07:12.845 "read": true, 00:07:12.845 "write": true, 00:07:12.845 "unmap": true, 00:07:12.845 "flush": true, 00:07:12.845 "reset": true, 00:07:12.845 "nvme_admin": false, 00:07:12.845 "nvme_io": false, 00:07:12.845 "nvme_io_md": false, 00:07:12.845 "write_zeroes": true, 00:07:12.845 "zcopy": true, 00:07:12.845 "get_zone_info": false, 00:07:12.845 "zone_management": false, 00:07:12.845 "zone_append": false, 00:07:12.845 "compare": false, 00:07:12.845 "compare_and_write": false, 00:07:12.845 "abort": true, 00:07:12.845 "seek_hole": false, 00:07:12.845 "seek_data": false, 00:07:12.845 "copy": true, 00:07:12.845 "nvme_iov_md": false 00:07:12.845 }, 00:07:12.845 "memory_domains": [ 00:07:12.845 { 00:07:12.845 "dma_device_id": "system", 00:07:12.845 "dma_device_type": 1 00:07:12.845 }, 00:07:12.845 { 00:07:12.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.845 "dma_device_type": 2 00:07:12.845 } 00:07:12.845 ], 00:07:12.845 "driver_specific": {} 00:07:12.845 } 00:07:12.845 ] 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:12.845 15:23:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.845 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.845 "name": 
"Existed_Raid", 00:07:12.845 "uuid": "2101416f-0f40-48ac-890d-425a72d06ea8", 00:07:12.845 "strip_size_kb": 64, 00:07:12.845 "state": "online", 00:07:12.845 "raid_level": "raid0", 00:07:12.845 "superblock": true, 00:07:12.845 "num_base_bdevs": 2, 00:07:12.845 "num_base_bdevs_discovered": 2, 00:07:12.846 "num_base_bdevs_operational": 2, 00:07:12.846 "base_bdevs_list": [ 00:07:12.846 { 00:07:12.846 "name": "BaseBdev1", 00:07:12.846 "uuid": "7abc9b57-3f2d-44cc-91f7-d81fd6caad2c", 00:07:12.846 "is_configured": true, 00:07:12.846 "data_offset": 2048, 00:07:12.846 "data_size": 63488 00:07:12.846 }, 00:07:12.846 { 00:07:12.846 "name": "BaseBdev2", 00:07:12.846 "uuid": "bd51a5ad-d930-4bb0-8511-528f32a1e1d0", 00:07:12.846 "is_configured": true, 00:07:12.846 "data_offset": 2048, 00:07:12.846 "data_size": 63488 00:07:12.846 } 00:07:12.846 ] 00:07:12.846 }' 00:07:12.846 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.846 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:13.106 [2024-11-26 15:23:11.506334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:13.106 "name": "Existed_Raid", 00:07:13.106 "aliases": [ 00:07:13.106 "2101416f-0f40-48ac-890d-425a72d06ea8" 00:07:13.106 ], 00:07:13.106 "product_name": "Raid Volume", 00:07:13.106 "block_size": 512, 00:07:13.106 "num_blocks": 126976, 00:07:13.106 "uuid": "2101416f-0f40-48ac-890d-425a72d06ea8", 00:07:13.106 "assigned_rate_limits": { 00:07:13.106 "rw_ios_per_sec": 0, 00:07:13.106 "rw_mbytes_per_sec": 0, 00:07:13.106 "r_mbytes_per_sec": 0, 00:07:13.106 "w_mbytes_per_sec": 0 00:07:13.106 }, 00:07:13.106 "claimed": false, 00:07:13.106 "zoned": false, 00:07:13.106 "supported_io_types": { 00:07:13.106 "read": true, 00:07:13.106 "write": true, 00:07:13.106 "unmap": true, 00:07:13.106 "flush": true, 00:07:13.106 "reset": true, 00:07:13.106 "nvme_admin": false, 00:07:13.106 "nvme_io": false, 00:07:13.106 "nvme_io_md": false, 00:07:13.106 "write_zeroes": true, 00:07:13.106 "zcopy": false, 00:07:13.106 "get_zone_info": false, 00:07:13.106 "zone_management": false, 00:07:13.106 "zone_append": false, 00:07:13.106 "compare": false, 00:07:13.106 "compare_and_write": false, 00:07:13.106 "abort": false, 00:07:13.106 "seek_hole": false, 00:07:13.106 "seek_data": false, 00:07:13.106 "copy": false, 00:07:13.106 "nvme_iov_md": false 00:07:13.106 }, 00:07:13.106 "memory_domains": [ 00:07:13.106 { 00:07:13.106 "dma_device_id": "system", 00:07:13.106 "dma_device_type": 1 00:07:13.106 }, 00:07:13.106 { 00:07:13.106 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:13.106 "dma_device_type": 2 00:07:13.106 }, 00:07:13.106 { 00:07:13.106 "dma_device_id": "system", 00:07:13.106 "dma_device_type": 1 00:07:13.106 }, 00:07:13.106 { 00:07:13.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.106 "dma_device_type": 2 00:07:13.106 } 00:07:13.106 ], 00:07:13.106 "driver_specific": { 00:07:13.106 "raid": { 00:07:13.106 "uuid": "2101416f-0f40-48ac-890d-425a72d06ea8", 00:07:13.106 "strip_size_kb": 64, 00:07:13.106 "state": "online", 00:07:13.106 "raid_level": "raid0", 00:07:13.106 "superblock": true, 00:07:13.106 "num_base_bdevs": 2, 00:07:13.106 "num_base_bdevs_discovered": 2, 00:07:13.106 "num_base_bdevs_operational": 2, 00:07:13.106 "base_bdevs_list": [ 00:07:13.106 { 00:07:13.106 "name": "BaseBdev1", 00:07:13.106 "uuid": "7abc9b57-3f2d-44cc-91f7-d81fd6caad2c", 00:07:13.106 "is_configured": true, 00:07:13.106 "data_offset": 2048, 00:07:13.106 "data_size": 63488 00:07:13.106 }, 00:07:13.106 { 00:07:13.106 "name": "BaseBdev2", 00:07:13.106 "uuid": "bd51a5ad-d930-4bb0-8511-528f32a1e1d0", 00:07:13.106 "is_configured": true, 00:07:13.106 "data_offset": 2048, 00:07:13.106 "data_size": 63488 00:07:13.106 } 00:07:13.106 ] 00:07:13.106 } 00:07:13.106 } 00:07:13.106 }' 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:13.106 BaseBdev2' 00:07:13.106 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.366 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.367 15:23:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.367 [2024-11-26 15:23:11.706157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:13.367 [2024-11-26 15:23:11.706192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.367 [2024-11-26 15:23:11.706253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.367 "name": "Existed_Raid", 00:07:13.367 "uuid": "2101416f-0f40-48ac-890d-425a72d06ea8", 00:07:13.367 "strip_size_kb": 64, 00:07:13.367 "state": "offline", 00:07:13.367 "raid_level": "raid0", 00:07:13.367 "superblock": true, 00:07:13.367 "num_base_bdevs": 2, 00:07:13.367 "num_base_bdevs_discovered": 1, 00:07:13.367 "num_base_bdevs_operational": 1, 00:07:13.367 "base_bdevs_list": [ 00:07:13.367 { 00:07:13.367 "name": null, 00:07:13.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.367 "is_configured": false, 00:07:13.367 "data_offset": 0, 00:07:13.367 "data_size": 63488 00:07:13.367 }, 00:07:13.367 { 00:07:13.367 "name": "BaseBdev2", 00:07:13.367 "uuid": "bd51a5ad-d930-4bb0-8511-528f32a1e1d0", 00:07:13.367 "is_configured": true, 00:07:13.367 "data_offset": 2048, 00:07:13.367 "data_size": 63488 00:07:13.367 } 00:07:13.367 ] 00:07:13.367 }' 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:07:13.367 15:23:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.937 [2024-11-26 15:23:12.201539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:13.937 [2024-11-26 15:23:12.201591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:13.937 15:23:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73964 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73964 ']' 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73964 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73964 00:07:13.937 killing process with pid 73964 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.937 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.938 15:23:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73964' 00:07:13.938 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73964 00:07:13.938 [2024-11-26 15:23:12.293944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.938 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73964 00:07:13.938 [2024-11-26 15:23:12.294932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.197 15:23:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:14.197 ************************************ 00:07:14.197 END TEST raid_state_function_test_sb 00:07:14.197 00:07:14.197 real 0m3.680s 00:07:14.197 user 0m5.763s 00:07:14.197 sys 0m0.726s 00:07:14.197 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.197 15:23:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.197 ************************************ 00:07:14.197 15:23:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:14.197 15:23:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:14.197 15:23:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.197 15:23:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.197 ************************************ 00:07:14.197 START TEST raid_superblock_test 00:07:14.198 ************************************ 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:14.198 15:23:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74200 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74200 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74200 ']' 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.198 15:23:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.198 [2024-11-26 15:23:12.666080] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:14.198 [2024-11-26 15:23:12.666660] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74200 ] 00:07:14.458 [2024-11-26 15:23:12.800925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:14.458 [2024-11-26 15:23:12.840277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.458 [2024-11-26 15:23:12.864662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.458 [2024-11-26 15:23:12.906694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.458 [2024-11-26 15:23:12.906803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.027 malloc1 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.027 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.287 [2024-11-26 15:23:13.502062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:15.287 [2024-11-26 15:23:13.502167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.287 [2024-11-26 15:23:13.502243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:15.287 [2024-11-26 15:23:13.502275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.287 [2024-11-26 15:23:13.504300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.287 [2024-11-26 15:23:13.504363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:15.287 pt1 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.287 malloc2 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.287 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.287 [2024-11-26 15:23:13.530591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:15.287 [2024-11-26 15:23:13.530677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.287 [2024-11-26 15:23:13.530728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:15.287 [2024-11-26 15:23:13.530755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.287 [2024-11-26 15:23:13.532732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.287 [2024-11-26 15:23:13.532809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:15.287 pt2 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
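The xtrace above builds the two base bdevs one loop iteration at a time (`bdev_raid.sh@416-426`). Condensed into a standalone `rpc.py` sequence, the setup looks roughly like the following — a sketch only, which assumes a running SPDK application (e.g. `bdev_svc -L bdev_raid`) listening on the default `/var/tmp/spdk.sock` and the repo path seen earlier in the log:

```shell
# Sketch of the base-bdev setup traced above. Assumes a live SPDK target
# on /var/tmp/spdk.sock; the rpc.py path matches the logged spdk_repo layout.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Two 32 MiB malloc bdevs with 512-byte blocks (bdev_malloc_create 32 512).
$rpc bdev_malloc_create 32 512 -b malloc1
$rpc bdev_malloc_create 32 512 -b malloc2

# Wrap each in a passthru bdev with a fixed UUID, so the raid superblock
# written later refers to stable identifiers rather than random ones.
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
```

The fixed `-u` UUIDs are what the later JSON dumps report for `pt1`/`pt2`.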
00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.288 [2024-11-26 15:23:13.542633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:15.288 [2024-11-26 15:23:13.544453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:15.288 [2024-11-26 15:23:13.544645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:15.288 [2024-11-26 15:23:13.544690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.288 [2024-11-26 15:23:13.544965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:15.288 [2024-11-26 15:23:13.545127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:15.288 [2024-11-26 15:23:13.545184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:15.288 [2024-11-26 15:23:13.545341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.288 15:23:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.288 "name": "raid_bdev1", 00:07:15.288 "uuid": "80874f00-07a5-48b7-9a36-fbaee24b458e", 00:07:15.288 "strip_size_kb": 64, 00:07:15.288 "state": "online", 00:07:15.288 "raid_level": "raid0", 00:07:15.288 "superblock": true, 00:07:15.288 "num_base_bdevs": 2, 00:07:15.288 "num_base_bdevs_discovered": 2, 00:07:15.288 "num_base_bdevs_operational": 2, 00:07:15.288 "base_bdevs_list": [ 00:07:15.288 { 00:07:15.288 "name": "pt1", 00:07:15.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:15.288 "is_configured": true, 00:07:15.288 "data_offset": 2048, 00:07:15.288 "data_size": 63488 00:07:15.288 }, 00:07:15.288 { 00:07:15.288 "name": "pt2", 00:07:15.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.288 
"is_configured": true, 00:07:15.288 "data_offset": 2048, 00:07:15.288 "data_size": 63488 00:07:15.288 } 00:07:15.288 ] 00:07:15.288 }' 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.288 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.547 [2024-11-26 15:23:13.967004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:15.547 "name": "raid_bdev1", 00:07:15.547 "aliases": [ 00:07:15.547 "80874f00-07a5-48b7-9a36-fbaee24b458e" 00:07:15.547 ], 00:07:15.547 "product_name": "Raid Volume", 00:07:15.547 "block_size": 512, 00:07:15.547 "num_blocks": 126976, 00:07:15.547 "uuid": 
"80874f00-07a5-48b7-9a36-fbaee24b458e", 00:07:15.547 "assigned_rate_limits": { 00:07:15.547 "rw_ios_per_sec": 0, 00:07:15.547 "rw_mbytes_per_sec": 0, 00:07:15.547 "r_mbytes_per_sec": 0, 00:07:15.547 "w_mbytes_per_sec": 0 00:07:15.547 }, 00:07:15.547 "claimed": false, 00:07:15.547 "zoned": false, 00:07:15.547 "supported_io_types": { 00:07:15.547 "read": true, 00:07:15.547 "write": true, 00:07:15.547 "unmap": true, 00:07:15.547 "flush": true, 00:07:15.547 "reset": true, 00:07:15.547 "nvme_admin": false, 00:07:15.547 "nvme_io": false, 00:07:15.547 "nvme_io_md": false, 00:07:15.547 "write_zeroes": true, 00:07:15.547 "zcopy": false, 00:07:15.547 "get_zone_info": false, 00:07:15.547 "zone_management": false, 00:07:15.547 "zone_append": false, 00:07:15.547 "compare": false, 00:07:15.547 "compare_and_write": false, 00:07:15.547 "abort": false, 00:07:15.547 "seek_hole": false, 00:07:15.547 "seek_data": false, 00:07:15.547 "copy": false, 00:07:15.547 "nvme_iov_md": false 00:07:15.547 }, 00:07:15.547 "memory_domains": [ 00:07:15.547 { 00:07:15.547 "dma_device_id": "system", 00:07:15.547 "dma_device_type": 1 00:07:15.547 }, 00:07:15.547 { 00:07:15.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.547 "dma_device_type": 2 00:07:15.547 }, 00:07:15.547 { 00:07:15.547 "dma_device_id": "system", 00:07:15.547 "dma_device_type": 1 00:07:15.547 }, 00:07:15.547 { 00:07:15.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.547 "dma_device_type": 2 00:07:15.547 } 00:07:15.547 ], 00:07:15.547 "driver_specific": { 00:07:15.547 "raid": { 00:07:15.547 "uuid": "80874f00-07a5-48b7-9a36-fbaee24b458e", 00:07:15.547 "strip_size_kb": 64, 00:07:15.547 "state": "online", 00:07:15.547 "raid_level": "raid0", 00:07:15.547 "superblock": true, 00:07:15.547 "num_base_bdevs": 2, 00:07:15.547 "num_base_bdevs_discovered": 2, 00:07:15.547 "num_base_bdevs_operational": 2, 00:07:15.547 "base_bdevs_list": [ 00:07:15.547 { 00:07:15.547 "name": "pt1", 00:07:15.547 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:15.547 "is_configured": true, 00:07:15.547 "data_offset": 2048, 00:07:15.547 "data_size": 63488 00:07:15.547 }, 00:07:15.547 { 00:07:15.547 "name": "pt2", 00:07:15.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.547 "is_configured": true, 00:07:15.547 "data_offset": 2048, 00:07:15.547 "data_size": 63488 00:07:15.547 } 00:07:15.547 ] 00:07:15.547 } 00:07:15.547 } 00:07:15.547 }' 00:07:15.547 15:23:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:15.805 pt2' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:15.805 [2024-11-26 15:23:14.186995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=80874f00-07a5-48b7-9a36-fbaee24b458e 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 80874f00-07a5-48b7-9a36-fbaee24b458e ']' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.805 15:23:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.805 [2024-11-26 15:23:14.234774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:15.805 [2024-11-26 15:23:14.234797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.805 [2024-11-26 15:23:14.234873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.805 [2024-11-26 15:23:14.234919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.805 [2024-11-26 15:23:14.234932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:15.805 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:15.806 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:15.806 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:15.806 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.806 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
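The geometry reported earlier in the trace is internally consistent: each base bdev is created as `bdev_malloc_create 32 512` (32 MiB of 512-byte blocks), the on-disk superblock pushes the data region to `data_offset` 2048, and RAID0 striping across the two remaining data regions yields the logged `blockcnt 126976`. A quick arithmetic check:

```shell
# Capacity arithmetic implied by the logged values.
malloc_mib=32             # bdev_malloc_create 32 512
block_size=512
data_offset=2048          # blocks reserved for the superblock, per the JSON dumps
num_base_bdevs=2

blocks_per_malloc=$(( malloc_mib * 1024 * 1024 / block_size ))
data_size=$(( blocks_per_malloc - data_offset ))
raid0_blocks=$(( num_base_bdevs * data_size ))

# 65536 per malloc bdev, data_size 63488, raid blockcnt 126976
echo "$blocks_per_malloc $data_size $raid0_blocks"
```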
00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 [2024-11-26 15:23:14.354838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:16.085 [2024-11-26 15:23:14.356676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:16.085 [2024-11-26 15:23:14.356770] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:16.085 [2024-11-26 15:23:14.356871] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:16.085 [2024-11-26 15:23:14.356930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.085 [2024-11-26 15:23:14.356970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:07:16.085 request: 00:07:16.085 { 00:07:16.085 "name": "raid_bdev1", 00:07:16.085 "raid_level": "raid0", 00:07:16.085 "base_bdevs": [ 00:07:16.085 "malloc1", 00:07:16.085 "malloc2" 00:07:16.085 ], 00:07:16.085 "strip_size_kb": 64, 00:07:16.085 "superblock": false, 00:07:16.085 "method": "bdev_raid_create", 00:07:16.085 "req_id": 1 00:07:16.085 } 00:07:16.085 Got JSON-RPC error response 00:07:16.085 response: 00:07:16.085 { 00:07:16.085 "code": -17, 00:07:16.085 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:07:16.085 } 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 [2024-11-26 15:23:14.418835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:16.085 [2024-11-26 15:23:14.418921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.085 [2024-11-26 15:23:14.418956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:16.085 
[2024-11-26 15:23:14.418969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.085 [2024-11-26 15:23:14.421059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.085 [2024-11-26 15:23:14.421097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:16.085 [2024-11-26 15:23:14.421155] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:16.085 [2024-11-26 15:23:14.421203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:16.085 pt1 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.085 "name": "raid_bdev1", 00:07:16.085 "uuid": "80874f00-07a5-48b7-9a36-fbaee24b458e", 00:07:16.085 "strip_size_kb": 64, 00:07:16.085 "state": "configuring", 00:07:16.085 "raid_level": "raid0", 00:07:16.085 "superblock": true, 00:07:16.085 "num_base_bdevs": 2, 00:07:16.085 "num_base_bdevs_discovered": 1, 00:07:16.085 "num_base_bdevs_operational": 2, 00:07:16.085 "base_bdevs_list": [ 00:07:16.085 { 00:07:16.085 "name": "pt1", 00:07:16.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.085 "is_configured": true, 00:07:16.085 "data_offset": 2048, 00:07:16.085 "data_size": 63488 00:07:16.085 }, 00:07:16.085 { 00:07:16.085 "name": null, 00:07:16.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.085 "is_configured": false, 00:07:16.085 "data_offset": 2048, 00:07:16.085 "data_size": 63488 00:07:16.085 } 00:07:16.085 ] 00:07:16.085 }' 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.085 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.651 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:16.651 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:16.651 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:16.651 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:07:16.651 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.651 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.651 [2024-11-26 15:23:14.854945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:16.651 [2024-11-26 15:23:14.855050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.651 [2024-11-26 15:23:14.855087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:16.651 [2024-11-26 15:23:14.855116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.651 [2024-11-26 15:23:14.855524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.651 [2024-11-26 15:23:14.855584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:16.652 [2024-11-26 15:23:14.855676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:16.652 [2024-11-26 15:23:14.855725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:16.652 [2024-11-26 15:23:14.855835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:16.652 [2024-11-26 15:23:14.855874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:16.652 [2024-11-26 15:23:14.856119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:16.652 [2024-11-26 15:23:14.856282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:16.652 [2024-11-26 15:23:14.856325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:16.652 [2024-11-26 15:23:14.856463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.652 
pt2 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.652 15:23:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.652 "name": "raid_bdev1", 00:07:16.652 "uuid": "80874f00-07a5-48b7-9a36-fbaee24b458e", 00:07:16.652 "strip_size_kb": 64, 00:07:16.652 "state": "online", 00:07:16.652 "raid_level": "raid0", 00:07:16.652 "superblock": true, 00:07:16.652 "num_base_bdevs": 2, 00:07:16.652 "num_base_bdevs_discovered": 2, 00:07:16.652 "num_base_bdevs_operational": 2, 00:07:16.652 "base_bdevs_list": [ 00:07:16.652 { 00:07:16.652 "name": "pt1", 00:07:16.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.652 "is_configured": true, 00:07:16.652 "data_offset": 2048, 00:07:16.652 "data_size": 63488 00:07:16.652 }, 00:07:16.652 { 00:07:16.652 "name": "pt2", 00:07:16.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.652 "is_configured": true, 00:07:16.652 "data_offset": 2048, 00:07:16.652 "data_size": 63488 00:07:16.652 } 00:07:16.652 ] 00:07:16.652 }' 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.652 15:23:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.911 [2024-11-26 15:23:15.263324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:16.911 "name": "raid_bdev1", 00:07:16.911 "aliases": [ 00:07:16.911 "80874f00-07a5-48b7-9a36-fbaee24b458e" 00:07:16.911 ], 00:07:16.911 "product_name": "Raid Volume", 00:07:16.911 "block_size": 512, 00:07:16.911 "num_blocks": 126976, 00:07:16.911 "uuid": "80874f00-07a5-48b7-9a36-fbaee24b458e", 00:07:16.911 "assigned_rate_limits": { 00:07:16.911 "rw_ios_per_sec": 0, 00:07:16.911 "rw_mbytes_per_sec": 0, 00:07:16.911 "r_mbytes_per_sec": 0, 00:07:16.911 "w_mbytes_per_sec": 0 00:07:16.911 }, 00:07:16.911 "claimed": false, 00:07:16.911 "zoned": false, 00:07:16.911 "supported_io_types": { 00:07:16.911 "read": true, 00:07:16.911 "write": true, 00:07:16.911 "unmap": true, 00:07:16.911 "flush": true, 00:07:16.911 "reset": true, 00:07:16.911 "nvme_admin": false, 00:07:16.911 "nvme_io": false, 00:07:16.911 "nvme_io_md": false, 00:07:16.911 "write_zeroes": true, 00:07:16.911 "zcopy": false, 00:07:16.911 "get_zone_info": false, 00:07:16.911 "zone_management": false, 00:07:16.911 "zone_append": false, 00:07:16.911 "compare": false, 00:07:16.911 "compare_and_write": false, 00:07:16.911 "abort": false, 00:07:16.911 "seek_hole": false, 00:07:16.911 "seek_data": false, 00:07:16.911 "copy": false, 00:07:16.911 "nvme_iov_md": false 00:07:16.911 }, 00:07:16.911 "memory_domains": [ 00:07:16.911 { 00:07:16.911 "dma_device_id": "system", 00:07:16.911 "dma_device_type": 1 00:07:16.911 }, 00:07:16.911 { 00:07:16.911 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.911 "dma_device_type": 2 00:07:16.911 }, 00:07:16.911 { 00:07:16.911 "dma_device_id": "system", 00:07:16.911 "dma_device_type": 1 00:07:16.911 }, 00:07:16.911 { 00:07:16.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.911 "dma_device_type": 2 00:07:16.911 } 00:07:16.911 ], 00:07:16.911 "driver_specific": { 00:07:16.911 "raid": { 00:07:16.911 "uuid": "80874f00-07a5-48b7-9a36-fbaee24b458e", 00:07:16.911 "strip_size_kb": 64, 00:07:16.911 "state": "online", 00:07:16.911 "raid_level": "raid0", 00:07:16.911 "superblock": true, 00:07:16.911 "num_base_bdevs": 2, 00:07:16.911 "num_base_bdevs_discovered": 2, 00:07:16.911 "num_base_bdevs_operational": 2, 00:07:16.911 "base_bdevs_list": [ 00:07:16.911 { 00:07:16.911 "name": "pt1", 00:07:16.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.911 "is_configured": true, 00:07:16.911 "data_offset": 2048, 00:07:16.911 "data_size": 63488 00:07:16.911 }, 00:07:16.911 { 00:07:16.911 "name": "pt2", 00:07:16.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.911 "is_configured": true, 00:07:16.911 "data_offset": 2048, 00:07:16.911 "data_size": 63488 00:07:16.911 } 00:07:16.911 ] 00:07:16.911 } 00:07:16.911 } 00:07:16.911 }' 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:16.911 pt2' 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.911 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:17.169 [2024-11-26 15:23:15.475346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 80874f00-07a5-48b7-9a36-fbaee24b458e '!=' 80874f00-07a5-48b7-9a36-fbaee24b458e ']' 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74200 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74200 ']' 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74200 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74200 00:07:17.169 killing process with pid 74200 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74200' 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74200 00:07:17.169 [2024-11-26 15:23:15.562252] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:17.169 [2024-11-26 15:23:15.562327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.169 [2024-11-26 15:23:15.562370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.169 [2024-11-26 15:23:15.562394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:17.169 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74200 00:07:17.169 [2024-11-26 15:23:15.584411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.427 15:23:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:17.427 00:07:17.427 real 0m3.220s 00:07:17.427 user 0m4.977s 00:07:17.427 sys 0m0.691s 00:07:17.427 ************************************ 00:07:17.427 END TEST raid_superblock_test 00:07:17.427 ************************************ 00:07:17.427 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.427 15:23:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.427 15:23:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:17.427 15:23:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:17.427 15:23:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.427 15:23:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.427 ************************************ 00:07:17.427 START TEST raid_read_error_test 00:07:17.427 ************************************ 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=2 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:17.427 15:23:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yNce1pJuoF 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74395 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74395 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74395 ']' 00:07:17.427 15:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.428 15:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.428 15:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.428 15:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.428 15:23:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.687 [2024-11-26 15:23:15.967747] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:17.687 [2024-11-26 15:23:15.967861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74395 ] 00:07:17.687 [2024-11-26 15:23:16.100789] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:17.687 [2024-11-26 15:23:16.123687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.687 [2024-11-26 15:23:16.149128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.946 [2024-11-26 15:23:16.192398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.946 [2024-11-26 15:23:16.192432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.516 BaseBdev1_malloc 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.516 true 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.516 [2024-11-26 15:23:16.820139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:18.516 [2024-11-26 15:23:16.820222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.516 [2024-11-26 15:23:16.820263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:18.516 [2024-11-26 15:23:16.820277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.516 [2024-11-26 15:23:16.822300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.516 [2024-11-26 15:23:16.822336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:18.516 BaseBdev1 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.516 BaseBdev2_malloc 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.516 true 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.516 [2024-11-26 15:23:16.860564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:18.516 [2024-11-26 15:23:16.860628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.516 [2024-11-26 15:23:16.860643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:18.516 [2024-11-26 15:23:16.860659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.516 [2024-11-26 15:23:16.862650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.516 [2024-11-26 15:23:16.862685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:18.516 BaseBdev2 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.516 [2024-11-26 15:23:16.872597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.516 [2024-11-26 15:23:16.874422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.516 [2024-11-26 15:23:16.874603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:07:18.516 [2024-11-26 15:23:16.874617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:18.516 [2024-11-26 15:23:16.874857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:18.516 [2024-11-26 15:23:16.875008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:18.516 [2024-11-26 15:23:16.875022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:18.516 [2024-11-26 15:23:16.875152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.516 "name": "raid_bdev1", 00:07:18.516 "uuid": "5f07f1f8-e5ec-4069-91b6-cf12ae9c7ae4", 00:07:18.516 "strip_size_kb": 64, 00:07:18.516 "state": "online", 00:07:18.516 "raid_level": "raid0", 00:07:18.516 "superblock": true, 00:07:18.516 "num_base_bdevs": 2, 00:07:18.516 "num_base_bdevs_discovered": 2, 00:07:18.516 "num_base_bdevs_operational": 2, 00:07:18.516 "base_bdevs_list": [ 00:07:18.516 { 00:07:18.516 "name": "BaseBdev1", 00:07:18.516 "uuid": "1621a4e9-7879-5cef-9840-9c43fdacdfc9", 00:07:18.516 "is_configured": true, 00:07:18.516 "data_offset": 2048, 00:07:18.516 "data_size": 63488 00:07:18.516 }, 00:07:18.516 { 00:07:18.516 "name": "BaseBdev2", 00:07:18.516 "uuid": "9e0757b3-1e58-5a18-9156-a4a523d02cd7", 00:07:18.516 "is_configured": true, 00:07:18.516 "data_offset": 2048, 00:07:18.516 "data_size": 63488 00:07:18.516 } 00:07:18.516 ] 00:07:18.516 }' 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.516 15:23:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.086 15:23:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:19.086 15:23:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:19.086 [2024-11-26 15:23:17.437130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:20.027 
15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.027 "name": "raid_bdev1", 00:07:20.027 "uuid": "5f07f1f8-e5ec-4069-91b6-cf12ae9c7ae4", 00:07:20.027 "strip_size_kb": 64, 00:07:20.027 "state": "online", 00:07:20.027 "raid_level": "raid0", 00:07:20.027 "superblock": true, 00:07:20.027 "num_base_bdevs": 2, 00:07:20.027 "num_base_bdevs_discovered": 2, 00:07:20.027 "num_base_bdevs_operational": 2, 00:07:20.027 "base_bdevs_list": [ 00:07:20.027 { 00:07:20.027 "name": "BaseBdev1", 00:07:20.027 "uuid": "1621a4e9-7879-5cef-9840-9c43fdacdfc9", 00:07:20.027 "is_configured": true, 00:07:20.027 "data_offset": 2048, 00:07:20.027 "data_size": 63488 00:07:20.027 }, 00:07:20.027 { 00:07:20.027 "name": "BaseBdev2", 00:07:20.027 "uuid": "9e0757b3-1e58-5a18-9156-a4a523d02cd7", 00:07:20.027 "is_configured": true, 00:07:20.027 "data_offset": 2048, 00:07:20.027 "data_size": 63488 00:07:20.027 } 00:07:20.027 ] 00:07:20.027 }' 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.027 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.597 [2024-11-26 15:23:18.815392] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.597 [2024-11-26 15:23:18.815429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.597 [2024-11-26 15:23:18.817937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.597 [2024-11-26 15:23:18.817995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.597 [2024-11-26 15:23:18.818026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.597 [2024-11-26 15:23:18.818037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:20.597 { 00:07:20.597 "results": [ 00:07:20.597 { 00:07:20.597 "job": "raid_bdev1", 00:07:20.597 "core_mask": "0x1", 00:07:20.597 "workload": "randrw", 00:07:20.597 "percentage": 50, 00:07:20.597 "status": "finished", 00:07:20.597 "queue_depth": 1, 00:07:20.597 "io_size": 131072, 00:07:20.597 "runtime": 1.376377, 00:07:20.597 "iops": 18229.743740268837, 00:07:20.597 "mibps": 2278.7179675336047, 00:07:20.597 "io_failed": 1, 00:07:20.597 "io_timeout": 0, 00:07:20.597 "avg_latency_us": 75.83389492048848, 00:07:20.597 "min_latency_us": 24.321450361718817, 00:07:20.597 "max_latency_us": 1385.2070077573433 00:07:20.597 } 00:07:20.597 ], 00:07:20.597 "core_count": 1 00:07:20.597 } 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74395 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74395 ']' 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74395 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74395 00:07:20.597 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.598 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.598 killing process with pid 74395 00:07:20.598 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74395' 00:07:20.598 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74395 00:07:20.598 [2024-11-26 15:23:18.860061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.598 15:23:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74395 00:07:20.598 [2024-11-26 15:23:18.874884] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yNce1pJuoF 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:20.857 00:07:20.857 real 0m3.221s 00:07:20.857 user 0m4.138s 00:07:20.857 sys 0m0.483s 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:20.857 15:23:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.857 ************************************ 00:07:20.857 END TEST raid_read_error_test 00:07:20.857 ************************************ 00:07:20.857 15:23:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:20.857 15:23:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:20.857 15:23:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.857 15:23:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.857 ************************************ 00:07:20.857 START TEST raid_write_error_test 00:07:20.857 ************************************ 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:20.857 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CiTzWiFcDG 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74524 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74524 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74524 ']' 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.858 15:23:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.858 15:23:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.858 [2024-11-26 15:23:19.263063] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:20.858 [2024-11-26 15:23:19.263196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74524 ] 00:07:21.117 [2024-11-26 15:23:19.397973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:21.117 [2024-11-26 15:23:19.435460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.117 [2024-11-26 15:23:19.460477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.117 [2024-11-26 15:23:19.502700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.117 [2024-11-26 15:23:19.502740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.686 BaseBdev1_malloc 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.686 true 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.686 15:23:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.686 [2024-11-26 15:23:20.105989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:21.686 [2024-11-26 15:23:20.106047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.686 [2024-11-26 15:23:20.106085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:21.686 [2024-11-26 15:23:20.106104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.686 [2024-11-26 15:23:20.108146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.686 [2024-11-26 15:23:20.108190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:21.686 BaseBdev1 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.686 BaseBdev2_malloc 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.686 true 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.686 [2024-11-26 15:23:20.146572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:21.686 [2024-11-26 15:23:20.146625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.686 [2024-11-26 15:23:20.146656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:21.686 [2024-11-26 15:23:20.146667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.686 [2024-11-26 15:23:20.148722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.686 [2024-11-26 15:23:20.148757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:21.686 BaseBdev2 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.686 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.686 [2024-11-26 15:23:20.158600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.946 [2024-11-26 15:23:20.160474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.946 [2024-11-26 15:23:20.160654] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:21.946 
[2024-11-26 15:23:20.160678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.946 [2024-11-26 15:23:20.160921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:21.947 [2024-11-26 15:23:20.161104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:21.947 [2024-11-26 15:23:20.161122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:21.947 [2024-11-26 15:23:20.161282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.947 15:23:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.947 "name": "raid_bdev1", 00:07:21.947 "uuid": "ee927512-75d7-4082-9202-7955fdc659fb", 00:07:21.947 "strip_size_kb": 64, 00:07:21.947 "state": "online", 00:07:21.947 "raid_level": "raid0", 00:07:21.947 "superblock": true, 00:07:21.947 "num_base_bdevs": 2, 00:07:21.947 "num_base_bdevs_discovered": 2, 00:07:21.947 "num_base_bdevs_operational": 2, 00:07:21.947 "base_bdevs_list": [ 00:07:21.947 { 00:07:21.947 "name": "BaseBdev1", 00:07:21.947 "uuid": "467f4389-eda0-5b14-b846-cbe7572699d6", 00:07:21.947 "is_configured": true, 00:07:21.947 "data_offset": 2048, 00:07:21.947 "data_size": 63488 00:07:21.947 }, 00:07:21.947 { 00:07:21.947 "name": "BaseBdev2", 00:07:21.947 "uuid": "233ec047-e48b-5cc9-95de-bf9968ee9313", 00:07:21.947 "is_configured": true, 00:07:21.947 "data_offset": 2048, 00:07:21.947 "data_size": 63488 00:07:21.947 } 00:07:21.947 ] 00:07:21.947 }' 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.947 15:23:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.207 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:22.207 15:23:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:22.467 [2024-11-26 15:23:20.715092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:23.407 15:23:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.407 "name": "raid_bdev1", 00:07:23.407 "uuid": "ee927512-75d7-4082-9202-7955fdc659fb", 00:07:23.407 "strip_size_kb": 64, 00:07:23.407 "state": "online", 00:07:23.407 "raid_level": "raid0", 00:07:23.407 "superblock": true, 00:07:23.407 "num_base_bdevs": 2, 00:07:23.407 "num_base_bdevs_discovered": 2, 00:07:23.407 "num_base_bdevs_operational": 2, 00:07:23.407 "base_bdevs_list": [ 00:07:23.407 { 00:07:23.407 "name": "BaseBdev1", 00:07:23.407 "uuid": "467f4389-eda0-5b14-b846-cbe7572699d6", 00:07:23.407 "is_configured": true, 00:07:23.407 "data_offset": 2048, 00:07:23.407 "data_size": 63488 00:07:23.407 }, 00:07:23.407 { 00:07:23.407 "name": "BaseBdev2", 00:07:23.407 "uuid": "233ec047-e48b-5cc9-95de-bf9968ee9313", 00:07:23.407 "is_configured": true, 00:07:23.407 "data_offset": 2048, 00:07:23.407 "data_size": 63488 00:07:23.407 } 00:07:23.407 ] 00:07:23.407 }' 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.407 15:23:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.667 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:23.667 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.667 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.667 [2024-11-26 15:23:22.081408] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.667 [2024-11-26 15:23:22.081446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.667 [2024-11-26 15:23:22.083934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.667 [2024-11-26 15:23:22.083989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.667 [2024-11-26 15:23:22.084021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.667 [2024-11-26 15:23:22.084038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:23.667 { 00:07:23.667 "results": [ 00:07:23.667 { 00:07:23.667 "job": "raid_bdev1", 00:07:23.667 "core_mask": "0x1", 00:07:23.667 "workload": "randrw", 00:07:23.667 "percentage": 50, 00:07:23.667 "status": "finished", 00:07:23.667 "queue_depth": 1, 00:07:23.667 "io_size": 131072, 00:07:23.667 "runtime": 1.36443, 00:07:23.667 "iops": 18069.083793232337, 00:07:23.667 "mibps": 2258.635474154042, 00:07:23.667 "io_failed": 1, 00:07:23.667 "io_timeout": 0, 00:07:23.667 "avg_latency_us": 76.61428817657526, 00:07:23.667 "min_latency_us": 24.20988407565589, 00:07:23.667 "max_latency_us": 2656.170138586246 00:07:23.667 } 00:07:23.667 ], 00:07:23.667 "core_count": 1 00:07:23.667 } 00:07:23.667 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.667 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74524 00:07:23.667 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74524 ']' 00:07:23.667 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74524 00:07:23.667 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:23.667 15:23:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.667 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74524 00:07:23.668 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.668 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.668 killing process with pid 74524 00:07:23.668 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74524' 00:07:23.668 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74524 00:07:23.668 [2024-11-26 15:23:22.117690] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.668 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74524 00:07:23.668 [2024-11-26 15:23:22.132346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.928 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CiTzWiFcDG 00:07:23.928 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:23.928 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:23.928 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:23.928 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:23.928 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.928 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.928 15:23:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:23.928 00:07:23.928 real 0m3.180s 00:07:23.928 user 0m4.074s 00:07:23.928 sys 0m0.484s 00:07:23.928 15:23:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.928 15:23:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.928 ************************************ 00:07:23.928 END TEST raid_write_error_test 00:07:23.928 ************************************ 00:07:23.928 15:23:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:23.928 15:23:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:23.928 15:23:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:23.928 15:23:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.928 15:23:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.188 ************************************ 00:07:24.188 START TEST raid_state_function_test 00:07:24.188 ************************************ 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74657 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74657' 00:07:24.188 Process 
raid pid: 74657 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74657 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74657 ']' 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.188 15:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.188 [2024-11-26 15:23:22.499219] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:24.188 [2024-11-26 15:23:22.499356] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.188 [2024-11-26 15:23:22.634448] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:24.448 [2024-11-26 15:23:22.673453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.448 [2024-11-26 15:23:22.698117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.448 [2024-11-26 15:23:22.740091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.448 [2024-11-26 15:23:22.740142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.016 [2024-11-26 15:23:23.322681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.016 [2024-11-26 15:23:23.322730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.016 [2024-11-26 15:23:23.322743] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.016 [2024-11-26 15:23:23.322766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.016 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.016 "name": "Existed_Raid", 00:07:25.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.017 "strip_size_kb": 64, 00:07:25.017 "state": "configuring", 00:07:25.017 "raid_level": "concat", 00:07:25.017 "superblock": false, 00:07:25.017 "num_base_bdevs": 2, 00:07:25.017 "num_base_bdevs_discovered": 0, 00:07:25.017 "num_base_bdevs_operational": 2, 00:07:25.017 "base_bdevs_list": [ 00:07:25.017 { 00:07:25.017 "name": "BaseBdev1", 00:07:25.017 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:25.017 "is_configured": false, 00:07:25.017 "data_offset": 0, 00:07:25.017 "data_size": 0 00:07:25.017 }, 00:07:25.017 { 00:07:25.017 "name": "BaseBdev2", 00:07:25.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.017 "is_configured": false, 00:07:25.017 "data_offset": 0, 00:07:25.017 "data_size": 0 00:07:25.017 } 00:07:25.017 ] 00:07:25.017 }' 00:07:25.017 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.017 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.588 [2024-11-26 15:23:23.774708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.588 [2024-11-26 15:23:23.774743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.588 [2024-11-26 15:23:23.786737] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.588 [2024-11-26 15:23:23.786774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.588 
[2024-11-26 15:23:23.786786] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.588 [2024-11-26 15:23:23.786793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.588 [2024-11-26 15:23:23.807521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.588 BaseBdev1 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.588 15:23:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.588 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.588 [ 00:07:25.588 { 00:07:25.588 "name": "BaseBdev1", 00:07:25.588 "aliases": [ 00:07:25.588 "036b1cf3-3fff-4035-bb85-319693b55a7e" 00:07:25.588 ], 00:07:25.588 "product_name": "Malloc disk", 00:07:25.588 "block_size": 512, 00:07:25.588 "num_blocks": 65536, 00:07:25.588 "uuid": "036b1cf3-3fff-4035-bb85-319693b55a7e", 00:07:25.588 "assigned_rate_limits": { 00:07:25.588 "rw_ios_per_sec": 0, 00:07:25.588 "rw_mbytes_per_sec": 0, 00:07:25.588 "r_mbytes_per_sec": 0, 00:07:25.588 "w_mbytes_per_sec": 0 00:07:25.589 }, 00:07:25.589 "claimed": true, 00:07:25.589 "claim_type": "exclusive_write", 00:07:25.589 "zoned": false, 00:07:25.589 "supported_io_types": { 00:07:25.589 "read": true, 00:07:25.589 "write": true, 00:07:25.589 "unmap": true, 00:07:25.589 "flush": true, 00:07:25.589 "reset": true, 00:07:25.589 "nvme_admin": false, 00:07:25.589 "nvme_io": false, 00:07:25.589 "nvme_io_md": false, 00:07:25.589 "write_zeroes": true, 00:07:25.589 "zcopy": true, 00:07:25.589 "get_zone_info": false, 00:07:25.589 "zone_management": false, 00:07:25.589 "zone_append": false, 00:07:25.589 "compare": false, 00:07:25.589 "compare_and_write": false, 00:07:25.589 "abort": true, 00:07:25.589 "seek_hole": false, 00:07:25.589 "seek_data": false, 00:07:25.589 "copy": true, 00:07:25.589 "nvme_iov_md": false 00:07:25.589 }, 00:07:25.589 "memory_domains": [ 00:07:25.589 { 00:07:25.589 "dma_device_id": "system", 00:07:25.589 "dma_device_type": 1 00:07:25.589 }, 00:07:25.589 { 00:07:25.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.589 "dma_device_type": 
2 00:07:25.589 } 00:07:25.589 ], 00:07:25.589 "driver_specific": {} 00:07:25.589 } 00:07:25.589 ] 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.589 "name": "Existed_Raid", 00:07:25.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.589 "strip_size_kb": 64, 00:07:25.589 "state": "configuring", 00:07:25.589 "raid_level": "concat", 00:07:25.589 "superblock": false, 00:07:25.589 "num_base_bdevs": 2, 00:07:25.589 "num_base_bdevs_discovered": 1, 00:07:25.589 "num_base_bdevs_operational": 2, 00:07:25.589 "base_bdevs_list": [ 00:07:25.589 { 00:07:25.589 "name": "BaseBdev1", 00:07:25.589 "uuid": "036b1cf3-3fff-4035-bb85-319693b55a7e", 00:07:25.589 "is_configured": true, 00:07:25.589 "data_offset": 0, 00:07:25.589 "data_size": 65536 00:07:25.589 }, 00:07:25.589 { 00:07:25.589 "name": "BaseBdev2", 00:07:25.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.589 "is_configured": false, 00:07:25.589 "data_offset": 0, 00:07:25.589 "data_size": 0 00:07:25.589 } 00:07:25.589 ] 00:07:25.589 }' 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.589 15:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 [2024-11-26 15:23:24.283678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.849 [2024-11-26 15:23:24.283731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.849 15:23:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 [2024-11-26 15:23:24.295718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.849 [2024-11-26 15:23:24.297543] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.849 [2024-11-26 15:23:24.297583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.109 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.109 "name": "Existed_Raid", 00:07:26.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.109 "strip_size_kb": 64, 00:07:26.109 "state": "configuring", 00:07:26.109 "raid_level": "concat", 00:07:26.109 "superblock": false, 00:07:26.109 "num_base_bdevs": 2, 00:07:26.109 "num_base_bdevs_discovered": 1, 00:07:26.109 "num_base_bdevs_operational": 2, 00:07:26.109 "base_bdevs_list": [ 00:07:26.109 { 00:07:26.109 "name": "BaseBdev1", 00:07:26.109 "uuid": "036b1cf3-3fff-4035-bb85-319693b55a7e", 00:07:26.109 "is_configured": true, 00:07:26.109 "data_offset": 0, 00:07:26.109 "data_size": 65536 00:07:26.109 }, 00:07:26.109 { 00:07:26.109 "name": "BaseBdev2", 00:07:26.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.109 "is_configured": false, 00:07:26.109 "data_offset": 0, 00:07:26.109 "data_size": 0 00:07:26.109 } 00:07:26.109 ] 00:07:26.109 }' 00:07:26.109 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.109 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:26.369 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.370 [2024-11-26 15:23:24.726795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.370 [2024-11-26 15:23:24.726847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:26.370 [2024-11-26 15:23:24.726861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:26.370 [2024-11-26 15:23:24.727140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:26.370 [2024-11-26 15:23:24.727297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:26.370 [2024-11-26 15:23:24.727313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:26.370 [2024-11-26 15:23:24.727498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.370 BaseBdev2 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.370 [ 00:07:26.370 { 00:07:26.370 "name": "BaseBdev2", 00:07:26.370 "aliases": [ 00:07:26.370 "186606f6-706a-488a-bd0f-ad480235fc2c" 00:07:26.370 ], 00:07:26.370 "product_name": "Malloc disk", 00:07:26.370 "block_size": 512, 00:07:26.370 "num_blocks": 65536, 00:07:26.370 "uuid": "186606f6-706a-488a-bd0f-ad480235fc2c", 00:07:26.370 "assigned_rate_limits": { 00:07:26.370 "rw_ios_per_sec": 0, 00:07:26.370 "rw_mbytes_per_sec": 0, 00:07:26.370 "r_mbytes_per_sec": 0, 00:07:26.370 "w_mbytes_per_sec": 0 00:07:26.370 }, 00:07:26.370 "claimed": true, 00:07:26.370 "claim_type": "exclusive_write", 00:07:26.370 "zoned": false, 00:07:26.370 "supported_io_types": { 00:07:26.370 "read": true, 00:07:26.370 "write": true, 00:07:26.370 "unmap": true, 00:07:26.370 "flush": true, 00:07:26.370 "reset": true, 00:07:26.370 "nvme_admin": false, 00:07:26.370 "nvme_io": false, 00:07:26.370 "nvme_io_md": false, 00:07:26.370 "write_zeroes": true, 00:07:26.370 "zcopy": true, 00:07:26.370 "get_zone_info": false, 00:07:26.370 "zone_management": false, 00:07:26.370 "zone_append": false, 00:07:26.370 "compare": false, 00:07:26.370 "compare_and_write": false, 
00:07:26.370 "abort": true, 00:07:26.370 "seek_hole": false, 00:07:26.370 "seek_data": false, 00:07:26.370 "copy": true, 00:07:26.370 "nvme_iov_md": false 00:07:26.370 }, 00:07:26.370 "memory_domains": [ 00:07:26.370 { 00:07:26.370 "dma_device_id": "system", 00:07:26.370 "dma_device_type": 1 00:07:26.370 }, 00:07:26.370 { 00:07:26.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.370 "dma_device_type": 2 00:07:26.370 } 00:07:26.370 ], 00:07:26.370 "driver_specific": {} 00:07:26.370 } 00:07:26.370 ] 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.370 
15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.370 "name": "Existed_Raid", 00:07:26.370 "uuid": "25d3cc6b-468e-419f-afcf-4f00fa9b8173", 00:07:26.370 "strip_size_kb": 64, 00:07:26.370 "state": "online", 00:07:26.370 "raid_level": "concat", 00:07:26.370 "superblock": false, 00:07:26.370 "num_base_bdevs": 2, 00:07:26.370 "num_base_bdevs_discovered": 2, 00:07:26.370 "num_base_bdevs_operational": 2, 00:07:26.370 "base_bdevs_list": [ 00:07:26.370 { 00:07:26.370 "name": "BaseBdev1", 00:07:26.370 "uuid": "036b1cf3-3fff-4035-bb85-319693b55a7e", 00:07:26.370 "is_configured": true, 00:07:26.370 "data_offset": 0, 00:07:26.370 "data_size": 65536 00:07:26.370 }, 00:07:26.370 { 00:07:26.370 "name": "BaseBdev2", 00:07:26.370 "uuid": "186606f6-706a-488a-bd0f-ad480235fc2c", 00:07:26.370 "is_configured": true, 00:07:26.370 "data_offset": 0, 00:07:26.370 "data_size": 65536 00:07:26.370 } 00:07:26.370 ] 00:07:26.370 }' 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.370 15:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:26.942 15:23:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.942 [2024-11-26 15:23:25.179243] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.942 "name": "Existed_Raid", 00:07:26.942 "aliases": [ 00:07:26.942 "25d3cc6b-468e-419f-afcf-4f00fa9b8173" 00:07:26.942 ], 00:07:26.942 "product_name": "Raid Volume", 00:07:26.942 "block_size": 512, 00:07:26.942 "num_blocks": 131072, 00:07:26.942 "uuid": "25d3cc6b-468e-419f-afcf-4f00fa9b8173", 00:07:26.942 "assigned_rate_limits": { 00:07:26.942 "rw_ios_per_sec": 0, 00:07:26.942 "rw_mbytes_per_sec": 0, 00:07:26.942 "r_mbytes_per_sec": 0, 00:07:26.942 "w_mbytes_per_sec": 0 00:07:26.942 }, 00:07:26.942 "claimed": false, 00:07:26.942 "zoned": false, 00:07:26.942 "supported_io_types": { 00:07:26.942 "read": true, 00:07:26.942 "write": true, 00:07:26.942 "unmap": true, 00:07:26.942 
"flush": true, 00:07:26.942 "reset": true, 00:07:26.942 "nvme_admin": false, 00:07:26.942 "nvme_io": false, 00:07:26.942 "nvme_io_md": false, 00:07:26.942 "write_zeroes": true, 00:07:26.942 "zcopy": false, 00:07:26.942 "get_zone_info": false, 00:07:26.942 "zone_management": false, 00:07:26.942 "zone_append": false, 00:07:26.942 "compare": false, 00:07:26.942 "compare_and_write": false, 00:07:26.942 "abort": false, 00:07:26.942 "seek_hole": false, 00:07:26.942 "seek_data": false, 00:07:26.942 "copy": false, 00:07:26.942 "nvme_iov_md": false 00:07:26.942 }, 00:07:26.942 "memory_domains": [ 00:07:26.942 { 00:07:26.942 "dma_device_id": "system", 00:07:26.942 "dma_device_type": 1 00:07:26.942 }, 00:07:26.942 { 00:07:26.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.942 "dma_device_type": 2 00:07:26.942 }, 00:07:26.942 { 00:07:26.942 "dma_device_id": "system", 00:07:26.942 "dma_device_type": 1 00:07:26.942 }, 00:07:26.942 { 00:07:26.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.942 "dma_device_type": 2 00:07:26.942 } 00:07:26.942 ], 00:07:26.942 "driver_specific": { 00:07:26.942 "raid": { 00:07:26.942 "uuid": "25d3cc6b-468e-419f-afcf-4f00fa9b8173", 00:07:26.942 "strip_size_kb": 64, 00:07:26.942 "state": "online", 00:07:26.942 "raid_level": "concat", 00:07:26.942 "superblock": false, 00:07:26.942 "num_base_bdevs": 2, 00:07:26.942 "num_base_bdevs_discovered": 2, 00:07:26.942 "num_base_bdevs_operational": 2, 00:07:26.942 "base_bdevs_list": [ 00:07:26.942 { 00:07:26.942 "name": "BaseBdev1", 00:07:26.942 "uuid": "036b1cf3-3fff-4035-bb85-319693b55a7e", 00:07:26.942 "is_configured": true, 00:07:26.942 "data_offset": 0, 00:07:26.942 "data_size": 65536 00:07:26.942 }, 00:07:26.942 { 00:07:26.942 "name": "BaseBdev2", 00:07:26.942 "uuid": "186606f6-706a-488a-bd0f-ad480235fc2c", 00:07:26.942 "is_configured": true, 00:07:26.942 "data_offset": 0, 00:07:26.942 "data_size": 65536 00:07:26.942 } 00:07:26.942 ] 00:07:26.942 } 00:07:26.942 } 00:07:26.942 }' 00:07:26.942 
15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:26.942 BaseBdev2' 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.942 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.943 [2024-11-26 15:23:25.379069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.943 [2024-11-26 15:23:25.379098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.943 [2024-11-26 15:23:25.379144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.943 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.203 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.203 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.203 "name": "Existed_Raid", 00:07:27.203 "uuid": "25d3cc6b-468e-419f-afcf-4f00fa9b8173", 00:07:27.203 "strip_size_kb": 64, 00:07:27.203 "state": "offline", 00:07:27.203 "raid_level": "concat", 00:07:27.203 "superblock": false, 00:07:27.203 "num_base_bdevs": 2, 00:07:27.203 "num_base_bdevs_discovered": 1, 00:07:27.203 "num_base_bdevs_operational": 1, 00:07:27.203 "base_bdevs_list": [ 
00:07:27.203 { 00:07:27.203 "name": null, 00:07:27.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.203 "is_configured": false, 00:07:27.204 "data_offset": 0, 00:07:27.204 "data_size": 65536 00:07:27.204 }, 00:07:27.204 { 00:07:27.204 "name": "BaseBdev2", 00:07:27.204 "uuid": "186606f6-706a-488a-bd0f-ad480235fc2c", 00:07:27.204 "is_configured": true, 00:07:27.204 "data_offset": 0, 00:07:27.204 "data_size": 65536 00:07:27.204 } 00:07:27.204 ] 00:07:27.204 }' 00:07:27.204 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.204 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:27.464 [2024-11-26 15:23:25.862200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:27.464 [2024-11-26 15:23:25.862251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74657 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74657 ']' 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74657 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:27.464 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74657 00:07:27.725 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.725 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.725 killing process with pid 74657 00:07:27.725 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74657' 00:07:27.725 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74657 00:07:27.725 [2024-11-26 15:23:25.952223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.725 15:23:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74657 00:07:27.725 [2024-11-26 15:23:25.953168] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.725 15:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:27.725 00:07:27.725 real 0m3.761s 00:07:27.725 user 0m5.943s 00:07:27.725 sys 0m0.729s 00:07:27.725 15:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.725 15:23:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.725 ************************************ 00:07:27.725 END TEST raid_state_function_test 00:07:27.725 ************************************ 00:07:27.984 15:23:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:27.984 15:23:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:27.984 15:23:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.984 15:23:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.984 ************************************ 00:07:27.984 START TEST raid_state_function_test_sb 
00:07:27.984 ************************************ 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:27.984 15:23:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:27.984 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74893 00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74893' 00:07:27.985 Process raid pid: 74893 00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74893 00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74893 ']' 00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.985 15:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.985 [2024-11-26 15:23:26.335378] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:27.985 [2024-11-26 15:23:26.335809] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.244 [2024-11-26 15:23:26.471326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:28.244 [2024-11-26 15:23:26.507362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.244 [2024-11-26 15:23:26.531850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.244 [2024-11-26 15:23:26.574201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.244 [2024-11-26 15:23:26.574236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.815 [2024-11-26 15:23:27.160803] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev1 00:07:28.815 [2024-11-26 15:23:27.160861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.815 [2024-11-26 15:23:27.160874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.815 [2024-11-26 15:23:27.160881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.815 15:23:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.815 "name": "Existed_Raid", 00:07:28.815 "uuid": "0579be6b-4eb8-438a-878e-720f2b105742", 00:07:28.815 "strip_size_kb": 64, 00:07:28.815 "state": "configuring", 00:07:28.815 "raid_level": "concat", 00:07:28.815 "superblock": true, 00:07:28.815 "num_base_bdevs": 2, 00:07:28.815 "num_base_bdevs_discovered": 0, 00:07:28.815 "num_base_bdevs_operational": 2, 00:07:28.815 "base_bdevs_list": [ 00:07:28.815 { 00:07:28.815 "name": "BaseBdev1", 00:07:28.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.815 "is_configured": false, 00:07:28.815 "data_offset": 0, 00:07:28.815 "data_size": 0 00:07:28.815 }, 00:07:28.815 { 00:07:28.815 "name": "BaseBdev2", 00:07:28.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.815 "is_configured": false, 00:07:28.815 "data_offset": 0, 00:07:28.815 "data_size": 0 00:07:28.815 } 00:07:28.815 ] 00:07:28.815 }' 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.815 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.386 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.386 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.386 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.386 [2024-11-26 15:23:27.588809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.386 [2024-11-26 15:23:27.588844] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:29.386 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.386 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.387 [2024-11-26 15:23:27.600837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:29.387 [2024-11-26 15:23:27.600872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.387 [2024-11-26 15:23:27.600883] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.387 [2024-11-26 15:23:27.600890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.387 [2024-11-26 15:23:27.621599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.387 BaseBdev1 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:29.387 
15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.387 [ 00:07:29.387 { 00:07:29.387 "name": "BaseBdev1", 00:07:29.387 "aliases": [ 00:07:29.387 "f40ed6fd-9149-4dd2-8873-f3ce36496fe8" 00:07:29.387 ], 00:07:29.387 "product_name": "Malloc disk", 00:07:29.387 "block_size": 512, 00:07:29.387 "num_blocks": 65536, 00:07:29.387 "uuid": "f40ed6fd-9149-4dd2-8873-f3ce36496fe8", 00:07:29.387 "assigned_rate_limits": { 00:07:29.387 "rw_ios_per_sec": 0, 00:07:29.387 "rw_mbytes_per_sec": 0, 00:07:29.387 "r_mbytes_per_sec": 0, 00:07:29.387 "w_mbytes_per_sec": 0 00:07:29.387 }, 00:07:29.387 "claimed": true, 00:07:29.387 "claim_type": "exclusive_write", 00:07:29.387 "zoned": 
false, 00:07:29.387 "supported_io_types": { 00:07:29.387 "read": true, 00:07:29.387 "write": true, 00:07:29.387 "unmap": true, 00:07:29.387 "flush": true, 00:07:29.387 "reset": true, 00:07:29.387 "nvme_admin": false, 00:07:29.387 "nvme_io": false, 00:07:29.387 "nvme_io_md": false, 00:07:29.387 "write_zeroes": true, 00:07:29.387 "zcopy": true, 00:07:29.387 "get_zone_info": false, 00:07:29.387 "zone_management": false, 00:07:29.387 "zone_append": false, 00:07:29.387 "compare": false, 00:07:29.387 "compare_and_write": false, 00:07:29.387 "abort": true, 00:07:29.387 "seek_hole": false, 00:07:29.387 "seek_data": false, 00:07:29.387 "copy": true, 00:07:29.387 "nvme_iov_md": false 00:07:29.387 }, 00:07:29.387 "memory_domains": [ 00:07:29.387 { 00:07:29.387 "dma_device_id": "system", 00:07:29.387 "dma_device_type": 1 00:07:29.387 }, 00:07:29.387 { 00:07:29.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.387 "dma_device_type": 2 00:07:29.387 } 00:07:29.387 ], 00:07:29.387 "driver_specific": {} 00:07:29.387 } 00:07:29.387 ] 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.387 "name": "Existed_Raid", 00:07:29.387 "uuid": "62766a94-9a7e-49bb-8a49-3dd801163097", 00:07:29.387 "strip_size_kb": 64, 00:07:29.387 "state": "configuring", 00:07:29.387 "raid_level": "concat", 00:07:29.387 "superblock": true, 00:07:29.387 "num_base_bdevs": 2, 00:07:29.387 "num_base_bdevs_discovered": 1, 00:07:29.387 "num_base_bdevs_operational": 2, 00:07:29.387 "base_bdevs_list": [ 00:07:29.387 { 00:07:29.387 "name": "BaseBdev1", 00:07:29.387 "uuid": "f40ed6fd-9149-4dd2-8873-f3ce36496fe8", 00:07:29.387 "is_configured": true, 00:07:29.387 "data_offset": 2048, 00:07:29.387 "data_size": 63488 00:07:29.387 }, 00:07:29.387 { 00:07:29.387 "name": "BaseBdev2", 00:07:29.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.387 "is_configured": false, 00:07:29.387 "data_offset": 0, 00:07:29.387 "data_size": 0 
00:07:29.387 } 00:07:29.387 ] 00:07:29.387 }' 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.387 15:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.648 [2024-11-26 15:23:28.041730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.648 [2024-11-26 15:23:28.041780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.648 [2024-11-26 15:23:28.053778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.648 [2024-11-26 15:23:28.055573] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.648 [2024-11-26 15:23:28.055608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:29.648 
15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.648 
"name": "Existed_Raid", 00:07:29.648 "uuid": "77c0cff4-af79-4652-86f5-c339fa59ab35", 00:07:29.648 "strip_size_kb": 64, 00:07:29.648 "state": "configuring", 00:07:29.648 "raid_level": "concat", 00:07:29.648 "superblock": true, 00:07:29.648 "num_base_bdevs": 2, 00:07:29.648 "num_base_bdevs_discovered": 1, 00:07:29.648 "num_base_bdevs_operational": 2, 00:07:29.648 "base_bdevs_list": [ 00:07:29.648 { 00:07:29.648 "name": "BaseBdev1", 00:07:29.648 "uuid": "f40ed6fd-9149-4dd2-8873-f3ce36496fe8", 00:07:29.648 "is_configured": true, 00:07:29.648 "data_offset": 2048, 00:07:29.648 "data_size": 63488 00:07:29.648 }, 00:07:29.648 { 00:07:29.648 "name": "BaseBdev2", 00:07:29.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.648 "is_configured": false, 00:07:29.648 "data_offset": 0, 00:07:29.648 "data_size": 0 00:07:29.648 } 00:07:29.648 ] 00:07:29.648 }' 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.648 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.219 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:30.219 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.219 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.219 [2024-11-26 15:23:28.492861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.219 [2024-11-26 15:23:28.493038] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:30.219 [2024-11-26 15:23:28.493054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:30.219 [2024-11-26 15:23:28.493354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:30.220 [2024-11-26 15:23:28.493511] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:30.220 [2024-11-26 15:23:28.493526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:30.220 BaseBdev2 00:07:30.220 [2024-11-26 15:23:28.493652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:30.220 [ 00:07:30.220 { 00:07:30.220 "name": "BaseBdev2", 00:07:30.220 "aliases": [ 00:07:30.220 "8243371b-a5c2-4201-b92f-e68477863287" 00:07:30.220 ], 00:07:30.220 "product_name": "Malloc disk", 00:07:30.220 "block_size": 512, 00:07:30.220 "num_blocks": 65536, 00:07:30.220 "uuid": "8243371b-a5c2-4201-b92f-e68477863287", 00:07:30.220 "assigned_rate_limits": { 00:07:30.220 "rw_ios_per_sec": 0, 00:07:30.220 "rw_mbytes_per_sec": 0, 00:07:30.220 "r_mbytes_per_sec": 0, 00:07:30.220 "w_mbytes_per_sec": 0 00:07:30.220 }, 00:07:30.220 "claimed": true, 00:07:30.220 "claim_type": "exclusive_write", 00:07:30.220 "zoned": false, 00:07:30.220 "supported_io_types": { 00:07:30.220 "read": true, 00:07:30.220 "write": true, 00:07:30.220 "unmap": true, 00:07:30.220 "flush": true, 00:07:30.220 "reset": true, 00:07:30.220 "nvme_admin": false, 00:07:30.220 "nvme_io": false, 00:07:30.220 "nvme_io_md": false, 00:07:30.220 "write_zeroes": true, 00:07:30.220 "zcopy": true, 00:07:30.220 "get_zone_info": false, 00:07:30.220 "zone_management": false, 00:07:30.220 "zone_append": false, 00:07:30.220 "compare": false, 00:07:30.220 "compare_and_write": false, 00:07:30.220 "abort": true, 00:07:30.220 "seek_hole": false, 00:07:30.220 "seek_data": false, 00:07:30.220 "copy": true, 00:07:30.220 "nvme_iov_md": false 00:07:30.220 }, 00:07:30.220 "memory_domains": [ 00:07:30.220 { 00:07:30.220 "dma_device_id": "system", 00:07:30.220 "dma_device_type": 1 00:07:30.220 }, 00:07:30.220 { 00:07:30.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.220 "dma_device_type": 2 00:07:30.220 } 00:07:30.220 ], 00:07:30.220 "driver_specific": {} 00:07:30.220 } 00:07:30.220 ] 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:30.220 
15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.220 "name": 
"Existed_Raid", 00:07:30.220 "uuid": "77c0cff4-af79-4652-86f5-c339fa59ab35", 00:07:30.220 "strip_size_kb": 64, 00:07:30.220 "state": "online", 00:07:30.220 "raid_level": "concat", 00:07:30.220 "superblock": true, 00:07:30.220 "num_base_bdevs": 2, 00:07:30.220 "num_base_bdevs_discovered": 2, 00:07:30.220 "num_base_bdevs_operational": 2, 00:07:30.220 "base_bdevs_list": [ 00:07:30.220 { 00:07:30.220 "name": "BaseBdev1", 00:07:30.220 "uuid": "f40ed6fd-9149-4dd2-8873-f3ce36496fe8", 00:07:30.220 "is_configured": true, 00:07:30.220 "data_offset": 2048, 00:07:30.220 "data_size": 63488 00:07:30.220 }, 00:07:30.220 { 00:07:30.220 "name": "BaseBdev2", 00:07:30.220 "uuid": "8243371b-a5c2-4201-b92f-e68477863287", 00:07:30.220 "is_configured": true, 00:07:30.220 "data_offset": 2048, 00:07:30.220 "data_size": 63488 00:07:30.220 } 00:07:30.220 ] 00:07:30.220 }' 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.220 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.812 [2024-11-26 15:23:28.981303] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.812 15:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.812 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.812 "name": "Existed_Raid", 00:07:30.812 "aliases": [ 00:07:30.812 "77c0cff4-af79-4652-86f5-c339fa59ab35" 00:07:30.812 ], 00:07:30.812 "product_name": "Raid Volume", 00:07:30.812 "block_size": 512, 00:07:30.812 "num_blocks": 126976, 00:07:30.812 "uuid": "77c0cff4-af79-4652-86f5-c339fa59ab35", 00:07:30.812 "assigned_rate_limits": { 00:07:30.812 "rw_ios_per_sec": 0, 00:07:30.812 "rw_mbytes_per_sec": 0, 00:07:30.812 "r_mbytes_per_sec": 0, 00:07:30.812 "w_mbytes_per_sec": 0 00:07:30.812 }, 00:07:30.812 "claimed": false, 00:07:30.812 "zoned": false, 00:07:30.812 "supported_io_types": { 00:07:30.812 "read": true, 00:07:30.812 "write": true, 00:07:30.812 "unmap": true, 00:07:30.812 "flush": true, 00:07:30.812 "reset": true, 00:07:30.812 "nvme_admin": false, 00:07:30.812 "nvme_io": false, 00:07:30.813 "nvme_io_md": false, 00:07:30.813 "write_zeroes": true, 00:07:30.813 "zcopy": false, 00:07:30.813 "get_zone_info": false, 00:07:30.813 "zone_management": false, 00:07:30.813 "zone_append": false, 00:07:30.813 "compare": false, 00:07:30.813 "compare_and_write": false, 00:07:30.813 "abort": false, 00:07:30.813 "seek_hole": false, 00:07:30.813 "seek_data": false, 00:07:30.813 "copy": false, 00:07:30.813 "nvme_iov_md": false 00:07:30.813 }, 00:07:30.813 "memory_domains": [ 00:07:30.813 { 00:07:30.813 "dma_device_id": "system", 00:07:30.813 "dma_device_type": 1 00:07:30.813 }, 00:07:30.813 { 00:07:30.813 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:30.813 "dma_device_type": 2 00:07:30.813 }, 00:07:30.813 { 00:07:30.813 "dma_device_id": "system", 00:07:30.813 "dma_device_type": 1 00:07:30.813 }, 00:07:30.813 { 00:07:30.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.813 "dma_device_type": 2 00:07:30.813 } 00:07:30.813 ], 00:07:30.813 "driver_specific": { 00:07:30.813 "raid": { 00:07:30.813 "uuid": "77c0cff4-af79-4652-86f5-c339fa59ab35", 00:07:30.813 "strip_size_kb": 64, 00:07:30.813 "state": "online", 00:07:30.813 "raid_level": "concat", 00:07:30.813 "superblock": true, 00:07:30.813 "num_base_bdevs": 2, 00:07:30.813 "num_base_bdevs_discovered": 2, 00:07:30.813 "num_base_bdevs_operational": 2, 00:07:30.813 "base_bdevs_list": [ 00:07:30.813 { 00:07:30.813 "name": "BaseBdev1", 00:07:30.813 "uuid": "f40ed6fd-9149-4dd2-8873-f3ce36496fe8", 00:07:30.813 "is_configured": true, 00:07:30.813 "data_offset": 2048, 00:07:30.813 "data_size": 63488 00:07:30.813 }, 00:07:30.813 { 00:07:30.813 "name": "BaseBdev2", 00:07:30.813 "uuid": "8243371b-a5c2-4201-b92f-e68477863287", 00:07:30.813 "is_configured": true, 00:07:30.813 "data_offset": 2048, 00:07:30.813 "data_size": 63488 00:07:30.813 } 00:07:30.813 ] 00:07:30.813 } 00:07:30.813 } 00:07:30.813 }' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:30.813 BaseBdev2' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.813 15:23:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.813 [2024-11-26 15:23:29.185136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:30.813 [2024-11-26 15:23:29.185221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.813 [2024-11-26 15:23:29.185328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.813 "name": "Existed_Raid", 00:07:30.813 "uuid": "77c0cff4-af79-4652-86f5-c339fa59ab35", 00:07:30.813 "strip_size_kb": 64, 00:07:30.813 "state": "offline", 00:07:30.813 "raid_level": "concat", 00:07:30.813 "superblock": true, 00:07:30.813 "num_base_bdevs": 2, 00:07:30.813 "num_base_bdevs_discovered": 1, 00:07:30.813 "num_base_bdevs_operational": 1, 00:07:30.813 "base_bdevs_list": [ 00:07:30.813 { 00:07:30.813 "name": null, 00:07:30.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.813 "is_configured": false, 00:07:30.813 "data_offset": 0, 00:07:30.813 "data_size": 63488 00:07:30.813 }, 00:07:30.813 { 00:07:30.813 "name": "BaseBdev2", 00:07:30.813 "uuid": "8243371b-a5c2-4201-b92f-e68477863287", 00:07:30.813 "is_configured": true, 00:07:30.813 "data_offset": 2048, 00:07:30.813 "data_size": 63488 00:07:30.813 } 00:07:30.813 ] 00:07:30.813 }' 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:07:30.813 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.384 [2024-11-26 15:23:29.700706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:31.384 [2024-11-26 15:23:29.700766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:31.384 15:23:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74893 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74893 ']' 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74893 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74893 00:07:31.384 killing process with pid 74893 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.384 15:23:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74893' 00:07:31.384 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74893 00:07:31.385 [2024-11-26 15:23:29.793134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.385 15:23:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74893 00:07:31.385 [2024-11-26 15:23:29.794103] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.645 ************************************ 00:07:31.645 END TEST raid_state_function_test_sb 00:07:31.645 ************************************ 00:07:31.645 15:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:31.645 00:07:31.645 real 0m3.766s 00:07:31.645 user 0m5.943s 00:07:31.645 sys 0m0.751s 00:07:31.645 15:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.645 15:23:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.645 15:23:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:31.645 15:23:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:31.645 15:23:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.645 15:23:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.645 ************************************ 00:07:31.645 START TEST raid_superblock_test 00:07:31.645 ************************************ 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:31.645 15:23:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75134 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75134 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75134 ']' 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.645 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.904 [2024-11-26 15:23:30.166958] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:31.904 [2024-11-26 15:23:30.167576] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75134 ] 00:07:31.904 [2024-11-26 15:23:30.301403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:31.904 [2024-11-26 15:23:30.338869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.904 [2024-11-26 15:23:30.364110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.163 [2024-11-26 15:23:30.406735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.163 [2024-11-26 15:23:30.406841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.735 malloc1 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.735 15:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.735 [2024-11-26 15:23:30.998081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:32.735 [2024-11-26 15:23:30.998211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.735 [2024-11-26 15:23:30.998260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:32.735 [2024-11-26 15:23:30.998321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.735 [2024-11-26 15:23:31.000394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.735 [2024-11-26 15:23:31.000455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.735 pt1 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.735 malloc2 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.735 [2024-11-26 15:23:31.030826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:32.735 [2024-11-26 15:23:31.030879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.735 [2024-11-26 15:23:31.030898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:32.735 [2024-11-26 15:23:31.030906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.735 [2024-11-26 15:23:31.032962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.735 [2024-11-26 15:23:31.033048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:32.735 pt2 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.735 [2024-11-26 15:23:31.042866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:32.735 [2024-11-26 15:23:31.044759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:32.735 [2024-11-26 15:23:31.044911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:32.735 [2024-11-26 15:23:31.044927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.735 [2024-11-26 15:23:31.045179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:32.735 [2024-11-26 15:23:31.045307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:32.735 [2024-11-26 15:23:31.045327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:32.735 [2024-11-26 15:23:31.045435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.735 15:23:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.735 "name": "raid_bdev1", 00:07:32.735 "uuid": "9e206f41-75c2-4a91-bee5-90f3311bbb41", 00:07:32.735 "strip_size_kb": 64, 00:07:32.735 "state": "online", 00:07:32.735 "raid_level": "concat", 00:07:32.735 "superblock": true, 00:07:32.735 "num_base_bdevs": 2, 00:07:32.735 "num_base_bdevs_discovered": 2, 00:07:32.735 "num_base_bdevs_operational": 2, 00:07:32.735 "base_bdevs_list": [ 00:07:32.735 { 00:07:32.735 "name": "pt1", 00:07:32.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.735 "is_configured": true, 00:07:32.735 "data_offset": 2048, 00:07:32.735 "data_size": 63488 00:07:32.735 }, 00:07:32.735 { 00:07:32.735 "name": "pt2", 00:07:32.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.735 
"is_configured": true, 00:07:32.735 "data_offset": 2048, 00:07:32.735 "data_size": 63488 00:07:32.735 } 00:07:32.735 ] 00:07:32.735 }' 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.735 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.996 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.996 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.996 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.996 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.996 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.257 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.257 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.257 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.257 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.257 [2024-11-26 15:23:31.479268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.257 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.257 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.257 "name": "raid_bdev1", 00:07:33.257 "aliases": [ 00:07:33.257 "9e206f41-75c2-4a91-bee5-90f3311bbb41" 00:07:33.257 ], 00:07:33.257 "product_name": "Raid Volume", 00:07:33.257 "block_size": 512, 00:07:33.257 "num_blocks": 126976, 00:07:33.257 "uuid": 
"9e206f41-75c2-4a91-bee5-90f3311bbb41", 00:07:33.257 "assigned_rate_limits": { 00:07:33.257 "rw_ios_per_sec": 0, 00:07:33.257 "rw_mbytes_per_sec": 0, 00:07:33.257 "r_mbytes_per_sec": 0, 00:07:33.257 "w_mbytes_per_sec": 0 00:07:33.257 }, 00:07:33.257 "claimed": false, 00:07:33.257 "zoned": false, 00:07:33.257 "supported_io_types": { 00:07:33.257 "read": true, 00:07:33.257 "write": true, 00:07:33.257 "unmap": true, 00:07:33.257 "flush": true, 00:07:33.257 "reset": true, 00:07:33.257 "nvme_admin": false, 00:07:33.257 "nvme_io": false, 00:07:33.257 "nvme_io_md": false, 00:07:33.257 "write_zeroes": true, 00:07:33.257 "zcopy": false, 00:07:33.257 "get_zone_info": false, 00:07:33.257 "zone_management": false, 00:07:33.257 "zone_append": false, 00:07:33.257 "compare": false, 00:07:33.257 "compare_and_write": false, 00:07:33.257 "abort": false, 00:07:33.257 "seek_hole": false, 00:07:33.257 "seek_data": false, 00:07:33.257 "copy": false, 00:07:33.257 "nvme_iov_md": false 00:07:33.257 }, 00:07:33.257 "memory_domains": [ 00:07:33.257 { 00:07:33.257 "dma_device_id": "system", 00:07:33.257 "dma_device_type": 1 00:07:33.257 }, 00:07:33.257 { 00:07:33.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.257 "dma_device_type": 2 00:07:33.257 }, 00:07:33.257 { 00:07:33.257 "dma_device_id": "system", 00:07:33.257 "dma_device_type": 1 00:07:33.257 }, 00:07:33.257 { 00:07:33.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.257 "dma_device_type": 2 00:07:33.257 } 00:07:33.257 ], 00:07:33.257 "driver_specific": { 00:07:33.257 "raid": { 00:07:33.257 "uuid": "9e206f41-75c2-4a91-bee5-90f3311bbb41", 00:07:33.257 "strip_size_kb": 64, 00:07:33.257 "state": "online", 00:07:33.257 "raid_level": "concat", 00:07:33.257 "superblock": true, 00:07:33.257 "num_base_bdevs": 2, 00:07:33.257 "num_base_bdevs_discovered": 2, 00:07:33.257 "num_base_bdevs_operational": 2, 00:07:33.257 "base_bdevs_list": [ 00:07:33.257 { 00:07:33.257 "name": "pt1", 00:07:33.257 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:33.257 "is_configured": true, 00:07:33.257 "data_offset": 2048, 00:07:33.257 "data_size": 63488 00:07:33.257 }, 00:07:33.257 { 00:07:33.257 "name": "pt2", 00:07:33.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.257 "is_configured": true, 00:07:33.257 "data_offset": 2048, 00:07:33.257 "data_size": 63488 00:07:33.257 } 00:07:33.257 ] 00:07:33.257 } 00:07:33.257 } 00:07:33.257 }' 00:07:33.257 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.257 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:33.257 pt2' 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:33.258 [2024-11-26 15:23:31.675207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9e206f41-75c2-4a91-bee5-90f3311bbb41 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9e206f41-75c2-4a91-bee5-90f3311bbb41 ']' 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.258 15:23:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.258 [2024-11-26 15:23:31.722987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.258 [2024-11-26 15:23:31.723051] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.258 [2024-11-26 15:23:31.723148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.258 [2024-11-26 15:23:31.723243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.258 [2024-11-26 15:23:31.723294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:33.258 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.519 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.520 [2024-11-26 15:23:31.847056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:33.520 [2024-11-26 15:23:31.848956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:33.520 [2024-11-26 15:23:31.849056] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:33.520 [2024-11-26 15:23:31.849150] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:33.520 [2024-11-26 15:23:31.849233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.520 [2024-11-26 15:23:31.849269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:07:33.520 request: 00:07:33.520 { 00:07:33.520 "name": "raid_bdev1", 00:07:33.520 "raid_level": "concat", 00:07:33.520 "base_bdevs": [ 00:07:33.520 "malloc1", 00:07:33.520 "malloc2" 00:07:33.520 ], 00:07:33.520 "strip_size_kb": 64, 00:07:33.520 "superblock": false, 00:07:33.520 "method": "bdev_raid_create", 00:07:33.520 "req_id": 1 00:07:33.520 } 00:07:33.520 Got JSON-RPC error response 00:07:33.520 response: 00:07:33.520 { 00:07:33.520 "code": -17, 00:07:33.520 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:07:33.520 } 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.520 [2024-11-26 15:23:31.899042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:33.520 [2024-11-26 15:23:31.899128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.520 [2024-11-26 15:23:31.899159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:33.520 
[2024-11-26 15:23:31.899200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.520 [2024-11-26 15:23:31.901241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.520 [2024-11-26 15:23:31.901312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:33.520 [2024-11-26 15:23:31.901390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:33.520 [2024-11-26 15:23:31.901453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:33.520 pt1 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.520 "name": "raid_bdev1", 00:07:33.520 "uuid": "9e206f41-75c2-4a91-bee5-90f3311bbb41", 00:07:33.520 "strip_size_kb": 64, 00:07:33.520 "state": "configuring", 00:07:33.520 "raid_level": "concat", 00:07:33.520 "superblock": true, 00:07:33.520 "num_base_bdevs": 2, 00:07:33.520 "num_base_bdevs_discovered": 1, 00:07:33.520 "num_base_bdevs_operational": 2, 00:07:33.520 "base_bdevs_list": [ 00:07:33.520 { 00:07:33.520 "name": "pt1", 00:07:33.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.520 "is_configured": true, 00:07:33.520 "data_offset": 2048, 00:07:33.520 "data_size": 63488 00:07:33.520 }, 00:07:33.520 { 00:07:33.520 "name": null, 00:07:33.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.520 "is_configured": false, 00:07:33.520 "data_offset": 2048, 00:07:33.520 "data_size": 63488 00:07:33.520 } 00:07:33.520 ] 00:07:33.520 }' 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.520 15:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.091 [2024-11-26 15:23:32.299165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:34.091 [2024-11-26 15:23:32.299235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.091 [2024-11-26 15:23:32.299255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:34.091 [2024-11-26 15:23:32.299266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.091 [2024-11-26 15:23:32.299635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.091 [2024-11-26 15:23:32.299654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:34.091 [2024-11-26 15:23:32.299716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:34.091 [2024-11-26 15:23:32.299735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:34.091 [2024-11-26 15:23:32.299811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:34.091 [2024-11-26 15:23:32.299822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:34.091 [2024-11-26 15:23:32.300041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:34.091 [2024-11-26 15:23:32.300150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:34.091 [2024-11-26 15:23:32.300159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:34.091 [2024-11-26 15:23:32.300271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.091 
pt2 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.091 15:23:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.091 "name": "raid_bdev1", 00:07:34.091 "uuid": "9e206f41-75c2-4a91-bee5-90f3311bbb41", 00:07:34.091 "strip_size_kb": 64, 00:07:34.091 "state": "online", 00:07:34.091 "raid_level": "concat", 00:07:34.091 "superblock": true, 00:07:34.091 "num_base_bdevs": 2, 00:07:34.091 "num_base_bdevs_discovered": 2, 00:07:34.091 "num_base_bdevs_operational": 2, 00:07:34.091 "base_bdevs_list": [ 00:07:34.091 { 00:07:34.091 "name": "pt1", 00:07:34.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.091 "is_configured": true, 00:07:34.091 "data_offset": 2048, 00:07:34.091 "data_size": 63488 00:07:34.091 }, 00:07:34.091 { 00:07:34.091 "name": "pt2", 00:07:34.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.091 "is_configured": true, 00:07:34.091 "data_offset": 2048, 00:07:34.091 "data_size": 63488 00:07:34.091 } 00:07:34.091 ] 00:07:34.091 }' 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.091 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.352 [2024-11-26 15:23:32.711521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.352 "name": "raid_bdev1", 00:07:34.352 "aliases": [ 00:07:34.352 "9e206f41-75c2-4a91-bee5-90f3311bbb41" 00:07:34.352 ], 00:07:34.352 "product_name": "Raid Volume", 00:07:34.352 "block_size": 512, 00:07:34.352 "num_blocks": 126976, 00:07:34.352 "uuid": "9e206f41-75c2-4a91-bee5-90f3311bbb41", 00:07:34.352 "assigned_rate_limits": { 00:07:34.352 "rw_ios_per_sec": 0, 00:07:34.352 "rw_mbytes_per_sec": 0, 00:07:34.352 "r_mbytes_per_sec": 0, 00:07:34.352 "w_mbytes_per_sec": 0 00:07:34.352 }, 00:07:34.352 "claimed": false, 00:07:34.352 "zoned": false, 00:07:34.352 "supported_io_types": { 00:07:34.352 "read": true, 00:07:34.352 "write": true, 00:07:34.352 "unmap": true, 00:07:34.352 "flush": true, 00:07:34.352 "reset": true, 00:07:34.352 "nvme_admin": false, 00:07:34.352 "nvme_io": false, 00:07:34.352 "nvme_io_md": false, 00:07:34.352 "write_zeroes": true, 00:07:34.352 "zcopy": false, 00:07:34.352 "get_zone_info": false, 00:07:34.352 "zone_management": false, 00:07:34.352 "zone_append": false, 00:07:34.352 "compare": false, 00:07:34.352 "compare_and_write": false, 00:07:34.352 "abort": false, 00:07:34.352 "seek_hole": false, 00:07:34.352 "seek_data": false, 00:07:34.352 "copy": false, 00:07:34.352 "nvme_iov_md": false 00:07:34.352 }, 00:07:34.352 "memory_domains": [ 00:07:34.352 { 00:07:34.352 "dma_device_id": "system", 00:07:34.352 "dma_device_type": 1 00:07:34.352 }, 00:07:34.352 { 00:07:34.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:07:34.352 "dma_device_type": 2 00:07:34.352 }, 00:07:34.352 { 00:07:34.352 "dma_device_id": "system", 00:07:34.352 "dma_device_type": 1 00:07:34.352 }, 00:07:34.352 { 00:07:34.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.352 "dma_device_type": 2 00:07:34.352 } 00:07:34.352 ], 00:07:34.352 "driver_specific": { 00:07:34.352 "raid": { 00:07:34.352 "uuid": "9e206f41-75c2-4a91-bee5-90f3311bbb41", 00:07:34.352 "strip_size_kb": 64, 00:07:34.352 "state": "online", 00:07:34.352 "raid_level": "concat", 00:07:34.352 "superblock": true, 00:07:34.352 "num_base_bdevs": 2, 00:07:34.352 "num_base_bdevs_discovered": 2, 00:07:34.352 "num_base_bdevs_operational": 2, 00:07:34.352 "base_bdevs_list": [ 00:07:34.352 { 00:07:34.352 "name": "pt1", 00:07:34.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.352 "is_configured": true, 00:07:34.352 "data_offset": 2048, 00:07:34.352 "data_size": 63488 00:07:34.352 }, 00:07:34.352 { 00:07:34.352 "name": "pt2", 00:07:34.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.352 "is_configured": true, 00:07:34.352 "data_offset": 2048, 00:07:34.352 "data_size": 63488 00:07:34.352 } 00:07:34.352 ] 00:07:34.352 } 00:07:34.352 } 00:07:34.352 }' 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:34.352 pt2' 00:07:34.352 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | 
.uuid' 00:07:34.613 [2024-11-26 15:23:32.919561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9e206f41-75c2-4a91-bee5-90f3311bbb41 '!=' 9e206f41-75c2-4a91-bee5-90f3311bbb41 ']' 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75134 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75134 ']' 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75134 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.613 15:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75134 00:07:34.613 15:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.613 15:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.613 killing process with pid 75134 00:07:34.613 15:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75134' 00:07:34.613 15:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75134 00:07:34.613 [2024-11-26 15:23:33.004328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.613 [2024-11-26 15:23:33.004404] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.613 [2024-11-26 15:23:33.004451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.613 [2024-11-26 15:23:33.004462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:34.613 15:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75134 00:07:34.613 [2024-11-26 15:23:33.026270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.874 ************************************ 00:07:34.874 END TEST raid_superblock_test 00:07:34.874 ************************************ 00:07:34.874 15:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:34.874 00:07:34.874 real 0m3.157s 00:07:34.874 user 0m4.835s 00:07:34.874 sys 0m0.680s 00:07:34.874 15:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.874 15:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.874 15:23:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:34.874 15:23:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.874 15:23:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.874 15:23:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.874 ************************************ 00:07:34.874 START TEST raid_read_error_test 00:07:34.874 ************************************ 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.874 15:23:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iLoMDi8PqL 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75329 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75329 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75329 ']' 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.874 15:23:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.135 [2024-11-26 15:23:33.411087] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:35.135 [2024-11-26 15:23:33.411334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75329 ] 00:07:35.135 [2024-11-26 15:23:33.544854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:35.135 [2024-11-26 15:23:33.582752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.135 [2024-11-26 15:23:33.607023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.395 [2024-11-26 15:23:33.650429] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.395 [2024-11-26 15:23:33.650509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.968 BaseBdev1_malloc 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.968 true 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.968 [2024-11-26 15:23:34.258167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.968 [2024-11-26 15:23:34.258294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.968 [2024-11-26 15:23:34.258331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.968 [2024-11-26 15:23:34.258385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.968 [2024-11-26 15:23:34.260550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.968 [2024-11-26 15:23:34.260618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.968 BaseBdev1 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.968 BaseBdev2_malloc 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.968 true 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.968 [2024-11-26 15:23:34.286885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.968 [2024-11-26 15:23:34.286986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.968 [2024-11-26 15:23:34.287017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.968 [2024-11-26 15:23:34.287046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.968 [2024-11-26 15:23:34.289081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.968 [2024-11-26 15:23:34.289157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.968 BaseBdev2 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.968 [2024-11-26 15:23:34.294915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.968 [2024-11-26 15:23:34.296754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.968 [2024-11-26 15:23:34.296912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:07:35.968 [2024-11-26 15:23:34.296926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.968 [2024-11-26 15:23:34.297166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:35.968 [2024-11-26 15:23:34.297322] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:35.968 [2024-11-26 15:23:34.297338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:35.968 [2024-11-26 15:23:34.297458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.968 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.968 "name": "raid_bdev1", 00:07:35.968 "uuid": "284b9efc-28a6-41cb-82e9-b7b8f60f3177", 00:07:35.968 "strip_size_kb": 64, 00:07:35.968 "state": "online", 00:07:35.968 "raid_level": "concat", 00:07:35.968 "superblock": true, 00:07:35.968 "num_base_bdevs": 2, 00:07:35.968 "num_base_bdevs_discovered": 2, 00:07:35.968 "num_base_bdevs_operational": 2, 00:07:35.968 "base_bdevs_list": [ 00:07:35.968 { 00:07:35.968 "name": "BaseBdev1", 00:07:35.969 "uuid": "4f0d372a-4a9a-5a78-b2c5-11267ef5c6fa", 00:07:35.969 "is_configured": true, 00:07:35.969 "data_offset": 2048, 00:07:35.969 "data_size": 63488 00:07:35.969 }, 00:07:35.969 { 00:07:35.969 "name": "BaseBdev2", 00:07:35.969 "uuid": "2f05b94c-0d0c-5259-bd76-68df99157ee5", 00:07:35.969 "is_configured": true, 00:07:35.969 "data_offset": 2048, 00:07:35.969 "data_size": 63488 00:07:35.969 } 00:07:35.969 ] 00:07:35.969 }' 00:07:35.969 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.969 15:23:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.539 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:36.539 15:23:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:36.539 [2024-11-26 15:23:34.795390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:37.479 
15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.479 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.479 "name": "raid_bdev1", 00:07:37.479 "uuid": "284b9efc-28a6-41cb-82e9-b7b8f60f3177", 00:07:37.479 "strip_size_kb": 64, 00:07:37.479 "state": "online", 00:07:37.479 "raid_level": "concat", 00:07:37.479 "superblock": true, 00:07:37.479 "num_base_bdevs": 2, 00:07:37.479 "num_base_bdevs_discovered": 2, 00:07:37.479 "num_base_bdevs_operational": 2, 00:07:37.479 "base_bdevs_list": [ 00:07:37.479 { 00:07:37.480 "name": "BaseBdev1", 00:07:37.480 "uuid": "4f0d372a-4a9a-5a78-b2c5-11267ef5c6fa", 00:07:37.480 "is_configured": true, 00:07:37.480 "data_offset": 2048, 00:07:37.480 "data_size": 63488 00:07:37.480 }, 00:07:37.480 { 00:07:37.480 "name": "BaseBdev2", 00:07:37.480 "uuid": "2f05b94c-0d0c-5259-bd76-68df99157ee5", 00:07:37.480 "is_configured": true, 00:07:37.480 "data_offset": 2048, 00:07:37.480 "data_size": 63488 00:07:37.480 } 00:07:37.480 ] 00:07:37.480 }' 00:07:37.480 15:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.480 15:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.740 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.740 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.740 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.740 [2024-11-26 15:23:36.144084] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.740 [2024-11-26 15:23:36.144206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.740 [2024-11-26 15:23:36.146725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.740 [2024-11-26 15:23:36.146828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.741 [2024-11-26 15:23:36.146878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.741 [2024-11-26 15:23:36.146923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:37.741 { 00:07:37.741 "results": [ 00:07:37.741 { 00:07:37.741 "job": "raid_bdev1", 00:07:37.741 "core_mask": "0x1", 00:07:37.741 "workload": "randrw", 00:07:37.741 "percentage": 50, 00:07:37.741 "status": "finished", 00:07:37.741 "queue_depth": 1, 00:07:37.741 "io_size": 131072, 00:07:37.741 "runtime": 1.34689, 00:07:37.741 "iops": 18265.04020372859, 00:07:37.741 "mibps": 2283.1300254660737, 00:07:37.741 "io_failed": 1, 00:07:37.741 "io_timeout": 0, 00:07:37.741 "avg_latency_us": 75.67279024190096, 00:07:37.741 "min_latency_us": 24.20988407565589, 00:07:37.741 "max_latency_us": 1378.0667654493159 00:07:37.741 } 00:07:37.741 ], 00:07:37.741 "core_count": 1 00:07:37.741 } 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75329 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75329 ']' 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75329 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75329 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75329' 00:07:37.741 killing process with pid 75329 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75329 00:07:37.741 [2024-11-26 15:23:36.191776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.741 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75329 00:07:37.741 [2024-11-26 15:23:36.206487] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iLoMDi8PqL 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:38.001 ************************************ 00:07:38.001 END TEST raid_read_error_test 00:07:38.001 ************************************ 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:38.001 00:07:38.001 real 0m3.111s 
00:07:38.001 user 0m3.949s 00:07:38.001 sys 0m0.488s 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.001 15:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.273 15:23:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:38.273 15:23:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.273 15:23:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.273 15:23:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.273 ************************************ 00:07:38.273 START TEST raid_write_error_test 00:07:38.273 ************************************ 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ogbBLB1adX 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75458 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75458 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75458 ']' 00:07:38.273 15:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.274 
15:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.274 15:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.274 15:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.274 15:23:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.274 [2024-11-26 15:23:36.592611] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:38.274 [2024-11-26 15:23:36.592723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75458 ] 00:07:38.274 [2024-11-26 15:23:36.726430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:38.547 [2024-11-26 15:23:36.766393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.547 [2024-11-26 15:23:36.791265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.547 [2024-11-26 15:23:36.833531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.547 [2024-11-26 15:23:36.833647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.117 BaseBdev1_malloc 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.117 true 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.117 15:23:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.117 [2024-11-26 15:23:37.440934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.117 [2024-11-26 15:23:37.440994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.117 [2024-11-26 15:23:37.441013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.117 [2024-11-26 15:23:37.441034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.117 [2024-11-26 15:23:37.443164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.117 [2024-11-26 15:23:37.443216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.117 BaseBdev1 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.117 BaseBdev2_malloc 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.117 true 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.117 [2024-11-26 15:23:37.481608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.117 [2024-11-26 15:23:37.481699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.117 [2024-11-26 15:23:37.481718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.117 [2024-11-26 15:23:37.481728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.117 [2024-11-26 15:23:37.483762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.117 [2024-11-26 15:23:37.483796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.117 BaseBdev2 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.117 [2024-11-26 15:23:37.493630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.117 [2024-11-26 15:23:37.495454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.117 [2024-11-26 15:23:37.495611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:39.117 
[2024-11-26 15:23:37.495630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.117 [2024-11-26 15:23:37.495862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:39.117 [2024-11-26 15:23:37.496012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:39.117 [2024-11-26 15:23:37.496021] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:39.117 [2024-11-26 15:23:37.496126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.117 15:23:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.117 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.117 "name": "raid_bdev1", 00:07:39.117 "uuid": "3fb36f95-1332-4944-9de1-fa7a93e46be0", 00:07:39.117 "strip_size_kb": 64, 00:07:39.118 "state": "online", 00:07:39.118 "raid_level": "concat", 00:07:39.118 "superblock": true, 00:07:39.118 "num_base_bdevs": 2, 00:07:39.118 "num_base_bdevs_discovered": 2, 00:07:39.118 "num_base_bdevs_operational": 2, 00:07:39.118 "base_bdevs_list": [ 00:07:39.118 { 00:07:39.118 "name": "BaseBdev1", 00:07:39.118 "uuid": "8516a53e-ceee-5097-865d-ae02694238b8", 00:07:39.118 "is_configured": true, 00:07:39.118 "data_offset": 2048, 00:07:39.118 "data_size": 63488 00:07:39.118 }, 00:07:39.118 { 00:07:39.118 "name": "BaseBdev2", 00:07:39.118 "uuid": "4a40aa1c-ce78-5e73-b7d5-01e826842416", 00:07:39.118 "is_configured": true, 00:07:39.118 "data_offset": 2048, 00:07:39.118 "data_size": 63488 00:07:39.118 } 00:07:39.118 ] 00:07:39.118 }' 00:07:39.118 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.118 15:23:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.688 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:39.688 15:23:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:39.688 [2024-11-26 15:23:38.026131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:40.626 15:23:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:40.626 15:23:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.627 15:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.627 15:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.627 15:23:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.627 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.627 "name": "raid_bdev1", 00:07:40.627 "uuid": "3fb36f95-1332-4944-9de1-fa7a93e46be0", 00:07:40.627 "strip_size_kb": 64, 00:07:40.627 "state": "online", 00:07:40.627 "raid_level": "concat", 00:07:40.627 "superblock": true, 00:07:40.627 "num_base_bdevs": 2, 00:07:40.627 "num_base_bdevs_discovered": 2, 00:07:40.627 "num_base_bdevs_operational": 2, 00:07:40.627 "base_bdevs_list": [ 00:07:40.627 { 00:07:40.627 "name": "BaseBdev1", 00:07:40.627 "uuid": "8516a53e-ceee-5097-865d-ae02694238b8", 00:07:40.627 "is_configured": true, 00:07:40.627 "data_offset": 2048, 00:07:40.627 "data_size": 63488 00:07:40.627 }, 00:07:40.627 { 00:07:40.627 "name": "BaseBdev2", 00:07:40.627 "uuid": "4a40aa1c-ce78-5e73-b7d5-01e826842416", 00:07:40.627 "is_configured": true, 00:07:40.627 "data_offset": 2048, 00:07:40.627 "data_size": 63488 00:07:40.627 } 00:07:40.627 ] 00:07:40.627 }' 00:07:40.627 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.627 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.197 [2024-11-26 15:23:39.404436] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.197 [2024-11-26 15:23:39.404534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.197 [2024-11-26 15:23:39.407014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.197 [2024-11-26 15:23:39.407111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.197 [2024-11-26 15:23:39.407163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.197 [2024-11-26 15:23:39.407223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:41.197 { 00:07:41.197 "results": [ 00:07:41.197 { 00:07:41.197 "job": "raid_bdev1", 00:07:41.197 "core_mask": "0x1", 00:07:41.197 "workload": "randrw", 00:07:41.197 "percentage": 50, 00:07:41.197 "status": "finished", 00:07:41.197 "queue_depth": 1, 00:07:41.197 "io_size": 131072, 00:07:41.197 "runtime": 1.376456, 00:07:41.197 "iops": 18194.551805506315, 00:07:41.197 "mibps": 2274.3189756882894, 00:07:41.197 "io_failed": 1, 00:07:41.197 "io_timeout": 0, 00:07:41.197 "avg_latency_us": 75.94980609904353, 00:07:41.197 "min_latency_us": 24.321450361718817, 00:07:41.197 "max_latency_us": 1356.646038525233 00:07:41.197 } 00:07:41.197 ], 00:07:41.197 "core_count": 1 00:07:41.197 } 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75458 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75458 ']' 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75458 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:41.197 15:23:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75458 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75458' 00:07:41.197 killing process with pid 75458 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75458 00:07:41.197 [2024-11-26 15:23:39.438914] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75458 00:07:41.197 [2024-11-26 15:23:39.454425] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ogbBLB1adX 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.197 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.197 ************************************ 00:07:41.198 END TEST raid_write_error_test 00:07:41.198 ************************************ 00:07:41.198 15:23:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != 
\0\.\0\0 ]] 00:07:41.198 00:07:41.198 real 0m3.176s 00:07:41.198 user 0m4.051s 00:07:41.198 sys 0m0.490s 00:07:41.198 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.198 15:23:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.458 15:23:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:41.458 15:23:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:41.458 15:23:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:41.458 15:23:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.458 15:23:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.458 ************************************ 00:07:41.458 START TEST raid_state_function_test 00:07:41.458 ************************************ 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:41.458 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75585 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75585' 00:07:41.459 Process raid pid: 75585 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75585 00:07:41.459 
15:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75585 ']' 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.459 15:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.459 [2024-11-26 15:23:39.832604] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:41.459 [2024-11-26 15:23:39.832731] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.719 [2024-11-26 15:23:39.968163] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:41.719 [2024-11-26 15:23:40.004066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.719 [2024-11-26 15:23:40.028606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.719 [2024-11-26 15:23:40.070906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.719 [2024-11-26 15:23:40.070943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.289 [2024-11-26 15:23:40.653647] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.289 [2024-11-26 15:23:40.653697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.289 [2024-11-26 15:23:40.653711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.289 [2024-11-26 15:23:40.653719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.289 "name": "Existed_Raid", 00:07:42.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.289 "strip_size_kb": 0, 00:07:42.289 "state": "configuring", 00:07:42.289 "raid_level": "raid1", 00:07:42.289 "superblock": false, 00:07:42.289 "num_base_bdevs": 2, 00:07:42.289 "num_base_bdevs_discovered": 0, 00:07:42.289 "num_base_bdevs_operational": 2, 00:07:42.289 "base_bdevs_list": [ 00:07:42.289 { 00:07:42.289 "name": "BaseBdev1", 00:07:42.289 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:42.289 "is_configured": false, 00:07:42.289 "data_offset": 0, 00:07:42.289 "data_size": 0 00:07:42.289 }, 00:07:42.289 { 00:07:42.289 "name": "BaseBdev2", 00:07:42.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.289 "is_configured": false, 00:07:42.289 "data_offset": 0, 00:07:42.289 "data_size": 0 00:07:42.289 } 00:07:42.289 ] 00:07:42.289 }' 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.289 15:23:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.859 [2024-11-26 15:23:41.073674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.859 [2024-11-26 15:23:41.073759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.859 [2024-11-26 15:23:41.081701] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.859 [2024-11-26 15:23:41.081777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.859 [2024-11-26 
15:23:41.081807] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.859 [2024-11-26 15:23:41.081828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.859 [2024-11-26 15:23:41.098520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.859 BaseBdev1 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.859 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.860 15:23:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.860 [ 00:07:42.860 { 00:07:42.860 "name": "BaseBdev1", 00:07:42.860 "aliases": [ 00:07:42.860 "508e8885-29c2-45c5-bb7d-c93007a7ac37" 00:07:42.860 ], 00:07:42.860 "product_name": "Malloc disk", 00:07:42.860 "block_size": 512, 00:07:42.860 "num_blocks": 65536, 00:07:42.860 "uuid": "508e8885-29c2-45c5-bb7d-c93007a7ac37", 00:07:42.860 "assigned_rate_limits": { 00:07:42.860 "rw_ios_per_sec": 0, 00:07:42.860 "rw_mbytes_per_sec": 0, 00:07:42.860 "r_mbytes_per_sec": 0, 00:07:42.860 "w_mbytes_per_sec": 0 00:07:42.860 }, 00:07:42.860 "claimed": true, 00:07:42.860 "claim_type": "exclusive_write", 00:07:42.860 "zoned": false, 00:07:42.860 "supported_io_types": { 00:07:42.860 "read": true, 00:07:42.860 "write": true, 00:07:42.860 "unmap": true, 00:07:42.860 "flush": true, 00:07:42.860 "reset": true, 00:07:42.860 "nvme_admin": false, 00:07:42.860 "nvme_io": false, 00:07:42.860 "nvme_io_md": false, 00:07:42.860 "write_zeroes": true, 00:07:42.860 "zcopy": true, 00:07:42.860 "get_zone_info": false, 00:07:42.860 "zone_management": false, 00:07:42.860 "zone_append": false, 00:07:42.860 "compare": false, 00:07:42.860 "compare_and_write": false, 00:07:42.860 "abort": true, 00:07:42.860 "seek_hole": false, 00:07:42.860 "seek_data": false, 00:07:42.860 "copy": true, 00:07:42.860 "nvme_iov_md": false 00:07:42.860 }, 00:07:42.860 "memory_domains": [ 00:07:42.860 { 00:07:42.860 "dma_device_id": "system", 00:07:42.860 "dma_device_type": 1 00:07:42.860 }, 00:07:42.860 { 00:07:42.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.860 "dma_device_type": 
2 00:07:42.860 } 00:07:42.860 ], 00:07:42.860 "driver_specific": {} 00:07:42.860 } 00:07:42.860 ] 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.860 "name": "Existed_Raid", 00:07:42.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.860 "strip_size_kb": 0, 00:07:42.860 "state": "configuring", 00:07:42.860 "raid_level": "raid1", 00:07:42.860 "superblock": false, 00:07:42.860 "num_base_bdevs": 2, 00:07:42.860 "num_base_bdevs_discovered": 1, 00:07:42.860 "num_base_bdevs_operational": 2, 00:07:42.860 "base_bdevs_list": [ 00:07:42.860 { 00:07:42.860 "name": "BaseBdev1", 00:07:42.860 "uuid": "508e8885-29c2-45c5-bb7d-c93007a7ac37", 00:07:42.860 "is_configured": true, 00:07:42.860 "data_offset": 0, 00:07:42.860 "data_size": 65536 00:07:42.860 }, 00:07:42.860 { 00:07:42.860 "name": "BaseBdev2", 00:07:42.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.860 "is_configured": false, 00:07:42.860 "data_offset": 0, 00:07:42.860 "data_size": 0 00:07:42.860 } 00:07:42.860 ] 00:07:42.860 }' 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.860 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.120 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.120 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.120 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.120 [2024-11-26 15:23:41.578661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.120 [2024-11-26 15:23:41.578707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:43.120 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.120 15:23:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.120 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.120 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.120 [2024-11-26 15:23:41.590704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.120 [2024-11-26 15:23:41.592558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.120 [2024-11-26 15:23:41.592639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.380 "name": "Existed_Raid", 00:07:43.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.380 "strip_size_kb": 0, 00:07:43.380 "state": "configuring", 00:07:43.380 "raid_level": "raid1", 00:07:43.380 "superblock": false, 00:07:43.380 "num_base_bdevs": 2, 00:07:43.380 "num_base_bdevs_discovered": 1, 00:07:43.380 "num_base_bdevs_operational": 2, 00:07:43.380 "base_bdevs_list": [ 00:07:43.380 { 00:07:43.380 "name": "BaseBdev1", 00:07:43.380 "uuid": "508e8885-29c2-45c5-bb7d-c93007a7ac37", 00:07:43.380 "is_configured": true, 00:07:43.380 "data_offset": 0, 00:07:43.380 "data_size": 65536 00:07:43.380 }, 00:07:43.380 { 00:07:43.380 "name": "BaseBdev2", 00:07:43.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.380 "is_configured": false, 00:07:43.380 "data_offset": 0, 00:07:43.380 "data_size": 0 00:07:43.380 } 00:07:43.380 ] 00:07:43.380 }' 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.380 15:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.641 
15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.641 [2024-11-26 15:23:42.049792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.641 [2024-11-26 15:23:42.049899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:43.641 [2024-11-26 15:23:42.049940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:43.641 [2024-11-26 15:23:42.050274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:43.641 [2024-11-26 15:23:42.050469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:43.641 [2024-11-26 15:23:42.050517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:43.641 [2024-11-26 15:23:42.050748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.641 BaseBdev2 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.641 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.641 [ 00:07:43.641 { 00:07:43.641 "name": "BaseBdev2", 00:07:43.641 "aliases": [ 00:07:43.641 "bd48888e-1695-4a7c-86b8-dd128d443360" 00:07:43.641 ], 00:07:43.641 "product_name": "Malloc disk", 00:07:43.641 "block_size": 512, 00:07:43.641 "num_blocks": 65536, 00:07:43.641 "uuid": "bd48888e-1695-4a7c-86b8-dd128d443360", 00:07:43.641 "assigned_rate_limits": { 00:07:43.641 "rw_ios_per_sec": 0, 00:07:43.641 "rw_mbytes_per_sec": 0, 00:07:43.641 "r_mbytes_per_sec": 0, 00:07:43.641 "w_mbytes_per_sec": 0 00:07:43.641 }, 00:07:43.641 "claimed": true, 00:07:43.641 "claim_type": "exclusive_write", 00:07:43.641 "zoned": false, 00:07:43.641 "supported_io_types": { 00:07:43.641 "read": true, 00:07:43.641 "write": true, 00:07:43.641 "unmap": true, 00:07:43.641 "flush": true, 00:07:43.641 "reset": true, 00:07:43.641 "nvme_admin": false, 00:07:43.641 "nvme_io": false, 00:07:43.642 "nvme_io_md": false, 00:07:43.642 "write_zeroes": true, 00:07:43.642 "zcopy": true, 00:07:43.642 "get_zone_info": false, 00:07:43.642 "zone_management": false, 00:07:43.642 "zone_append": false, 00:07:43.642 "compare": false, 00:07:43.642 "compare_and_write": false, 
00:07:43.642 "abort": true, 00:07:43.642 "seek_hole": false, 00:07:43.642 "seek_data": false, 00:07:43.642 "copy": true, 00:07:43.642 "nvme_iov_md": false 00:07:43.642 }, 00:07:43.642 "memory_domains": [ 00:07:43.642 { 00:07:43.642 "dma_device_id": "system", 00:07:43.642 "dma_device_type": 1 00:07:43.642 }, 00:07:43.642 { 00:07:43.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.642 "dma_device_type": 2 00:07:43.642 } 00:07:43.642 ], 00:07:43.642 "driver_specific": {} 00:07:43.642 } 00:07:43.642 ] 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.642 
15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.642 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.902 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.902 "name": "Existed_Raid", 00:07:43.902 "uuid": "85cce300-6534-4926-b48a-6ee6357eaf05", 00:07:43.902 "strip_size_kb": 0, 00:07:43.902 "state": "online", 00:07:43.902 "raid_level": "raid1", 00:07:43.902 "superblock": false, 00:07:43.902 "num_base_bdevs": 2, 00:07:43.902 "num_base_bdevs_discovered": 2, 00:07:43.902 "num_base_bdevs_operational": 2, 00:07:43.902 "base_bdevs_list": [ 00:07:43.902 { 00:07:43.902 "name": "BaseBdev1", 00:07:43.902 "uuid": "508e8885-29c2-45c5-bb7d-c93007a7ac37", 00:07:43.902 "is_configured": true, 00:07:43.902 "data_offset": 0, 00:07:43.902 "data_size": 65536 00:07:43.902 }, 00:07:43.902 { 00:07:43.902 "name": "BaseBdev2", 00:07:43.902 "uuid": "bd48888e-1695-4a7c-86b8-dd128d443360", 00:07:43.902 "is_configured": true, 00:07:43.902 "data_offset": 0, 00:07:43.902 "data_size": 65536 00:07:43.902 } 00:07:43.902 ] 00:07:43.902 }' 00:07:43.902 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.902 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:44.162 15:23:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.162 [2024-11-26 15:23:42.526251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.162 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.162 "name": "Existed_Raid", 00:07:44.162 "aliases": [ 00:07:44.162 "85cce300-6534-4926-b48a-6ee6357eaf05" 00:07:44.162 ], 00:07:44.162 "product_name": "Raid Volume", 00:07:44.162 "block_size": 512, 00:07:44.162 "num_blocks": 65536, 00:07:44.162 "uuid": "85cce300-6534-4926-b48a-6ee6357eaf05", 00:07:44.162 "assigned_rate_limits": { 00:07:44.162 "rw_ios_per_sec": 0, 00:07:44.162 "rw_mbytes_per_sec": 0, 00:07:44.162 "r_mbytes_per_sec": 0, 00:07:44.162 "w_mbytes_per_sec": 0 00:07:44.162 }, 00:07:44.162 "claimed": false, 00:07:44.162 "zoned": false, 00:07:44.163 "supported_io_types": { 00:07:44.163 "read": true, 00:07:44.163 "write": true, 00:07:44.163 "unmap": false, 00:07:44.163 
"flush": false, 00:07:44.163 "reset": true, 00:07:44.163 "nvme_admin": false, 00:07:44.163 "nvme_io": false, 00:07:44.163 "nvme_io_md": false, 00:07:44.163 "write_zeroes": true, 00:07:44.163 "zcopy": false, 00:07:44.163 "get_zone_info": false, 00:07:44.163 "zone_management": false, 00:07:44.163 "zone_append": false, 00:07:44.163 "compare": false, 00:07:44.163 "compare_and_write": false, 00:07:44.163 "abort": false, 00:07:44.163 "seek_hole": false, 00:07:44.163 "seek_data": false, 00:07:44.163 "copy": false, 00:07:44.163 "nvme_iov_md": false 00:07:44.163 }, 00:07:44.163 "memory_domains": [ 00:07:44.163 { 00:07:44.163 "dma_device_id": "system", 00:07:44.163 "dma_device_type": 1 00:07:44.163 }, 00:07:44.163 { 00:07:44.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.163 "dma_device_type": 2 00:07:44.163 }, 00:07:44.163 { 00:07:44.163 "dma_device_id": "system", 00:07:44.163 "dma_device_type": 1 00:07:44.163 }, 00:07:44.163 { 00:07:44.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.163 "dma_device_type": 2 00:07:44.163 } 00:07:44.163 ], 00:07:44.163 "driver_specific": { 00:07:44.163 "raid": { 00:07:44.163 "uuid": "85cce300-6534-4926-b48a-6ee6357eaf05", 00:07:44.163 "strip_size_kb": 0, 00:07:44.163 "state": "online", 00:07:44.163 "raid_level": "raid1", 00:07:44.163 "superblock": false, 00:07:44.163 "num_base_bdevs": 2, 00:07:44.163 "num_base_bdevs_discovered": 2, 00:07:44.163 "num_base_bdevs_operational": 2, 00:07:44.163 "base_bdevs_list": [ 00:07:44.163 { 00:07:44.163 "name": "BaseBdev1", 00:07:44.163 "uuid": "508e8885-29c2-45c5-bb7d-c93007a7ac37", 00:07:44.163 "is_configured": true, 00:07:44.163 "data_offset": 0, 00:07:44.163 "data_size": 65536 00:07:44.163 }, 00:07:44.163 { 00:07:44.163 "name": "BaseBdev2", 00:07:44.163 "uuid": "bd48888e-1695-4a7c-86b8-dd128d443360", 00:07:44.163 "is_configured": true, 00:07:44.163 "data_offset": 0, 00:07:44.163 "data_size": 65536 00:07:44.163 } 00:07:44.163 ] 00:07:44.163 } 00:07:44.163 } 00:07:44.163 }' 00:07:44.163 
15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.163 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:44.163 BaseBdev2' 00:07:44.163 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.423 [2024-11-26 15:23:42.750099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.423 "name": "Existed_Raid", 00:07:44.423 "uuid": "85cce300-6534-4926-b48a-6ee6357eaf05", 00:07:44.423 "strip_size_kb": 0, 00:07:44.423 "state": "online", 00:07:44.423 "raid_level": "raid1", 00:07:44.423 "superblock": false, 00:07:44.423 "num_base_bdevs": 2, 00:07:44.423 "num_base_bdevs_discovered": 1, 00:07:44.423 "num_base_bdevs_operational": 1, 00:07:44.423 "base_bdevs_list": [ 00:07:44.423 { 00:07:44.423 "name": null, 00:07:44.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.423 "is_configured": false, 00:07:44.423 "data_offset": 0, 00:07:44.423 "data_size": 65536 00:07:44.423 }, 00:07:44.423 { 00:07:44.423 "name": 
"BaseBdev2", 00:07:44.423 "uuid": "bd48888e-1695-4a7c-86b8-dd128d443360", 00:07:44.423 "is_configured": true, 00:07:44.423 "data_offset": 0, 00:07:44.423 "data_size": 65536 00:07:44.423 } 00:07:44.423 ] 00:07:44.423 }' 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.423 15:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.994 [2024-11-26 15:23:43.241570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:44.994 [2024-11-26 15:23:43.241658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:07:44.994 [2024-11-26 15:23:43.253350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.994 [2024-11-26 15:23:43.253412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.994 [2024-11-26 15:23:43.253426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75585 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75585 ']' 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75585 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@959 -- # uname 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75585 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75585' 00:07:44.994 killing process with pid 75585 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75585 00:07:44.994 [2024-11-26 15:23:43.321842] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.994 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75585 00:07:44.994 [2024-11-26 15:23:43.322860] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:45.255 ************************************ 00:07:45.255 END TEST raid_state_function_test 00:07:45.255 ************************************ 00:07:45.255 00:07:45.255 real 0m3.792s 00:07:45.255 user 0m6.022s 00:07:45.255 sys 0m0.718s 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.255 15:23:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:45.255 15:23:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:45.255 15:23:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.255 
15:23:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.255 ************************************ 00:07:45.255 START TEST raid_state_function_test_sb 00:07:45.255 ************************************ 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local 
raid_bdev_name=Existed_Raid 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75822 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75822' 00:07:45.255 Process raid pid: 75822 00:07:45.255 15:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75822 00:07:45.256 15:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75822 ']' 00:07:45.256 15:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.256 15:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.256 15:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:45.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.256 15:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.256 15:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.256 [2024-11-26 15:23:43.695413] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:45.256 [2024-11-26 15:23:43.695617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.525 [2024-11-26 15:23:43.830346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.525 [2024-11-26 15:23:43.867604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.525 [2024-11-26 15:23:43.892216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.525 [2024-11-26 15:23:43.935132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.525 [2024-11-26 15:23:43.935228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.112 [2024-11-26 
15:23:44.522231] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.112 [2024-11-26 15:23:44.522280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.112 [2024-11-26 15:23:44.522292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.112 [2024-11-26 15:23:44.522299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.112 "name": "Existed_Raid", 00:07:46.112 "uuid": "43d3c533-1b91-4e86-a4a2-7146835b8a74", 00:07:46.112 "strip_size_kb": 0, 00:07:46.112 "state": "configuring", 00:07:46.112 "raid_level": "raid1", 00:07:46.112 "superblock": true, 00:07:46.112 "num_base_bdevs": 2, 00:07:46.112 "num_base_bdevs_discovered": 0, 00:07:46.112 "num_base_bdevs_operational": 2, 00:07:46.112 "base_bdevs_list": [ 00:07:46.112 { 00:07:46.112 "name": "BaseBdev1", 00:07:46.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.112 "is_configured": false, 00:07:46.112 "data_offset": 0, 00:07:46.112 "data_size": 0 00:07:46.112 }, 00:07:46.112 { 00:07:46.112 "name": "BaseBdev2", 00:07:46.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.112 "is_configured": false, 00:07:46.112 "data_offset": 0, 00:07:46.112 "data_size": 0 00:07:46.112 } 00:07:46.112 ] 00:07:46.112 }' 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.112 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.683 [2024-11-26 15:23:44.970265] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.683 [2024-11-26 15:23:44.970340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.683 [2024-11-26 15:23:44.982292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.683 [2024-11-26 15:23:44.982368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.683 [2024-11-26 15:23:44.982397] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.683 [2024-11-26 15:23:44.982416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.683 15:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.683 [2024-11-26 15:23:45.003147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.683 BaseBdev1 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.683 
15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.683 [ 00:07:46.683 { 00:07:46.683 "name": "BaseBdev1", 00:07:46.683 "aliases": [ 00:07:46.683 "1639e56b-9b8f-4704-97c8-6d17034756e5" 00:07:46.683 ], 00:07:46.683 "product_name": "Malloc disk", 00:07:46.683 "block_size": 512, 00:07:46.683 "num_blocks": 65536, 00:07:46.683 "uuid": "1639e56b-9b8f-4704-97c8-6d17034756e5", 00:07:46.683 "assigned_rate_limits": { 00:07:46.683 "rw_ios_per_sec": 0, 00:07:46.683 "rw_mbytes_per_sec": 0, 00:07:46.683 "r_mbytes_per_sec": 0, 00:07:46.683 "w_mbytes_per_sec": 0 
00:07:46.683 }, 00:07:46.683 "claimed": true, 00:07:46.683 "claim_type": "exclusive_write", 00:07:46.683 "zoned": false, 00:07:46.683 "supported_io_types": { 00:07:46.683 "read": true, 00:07:46.683 "write": true, 00:07:46.683 "unmap": true, 00:07:46.683 "flush": true, 00:07:46.683 "reset": true, 00:07:46.683 "nvme_admin": false, 00:07:46.683 "nvme_io": false, 00:07:46.683 "nvme_io_md": false, 00:07:46.683 "write_zeroes": true, 00:07:46.683 "zcopy": true, 00:07:46.683 "get_zone_info": false, 00:07:46.683 "zone_management": false, 00:07:46.683 "zone_append": false, 00:07:46.683 "compare": false, 00:07:46.683 "compare_and_write": false, 00:07:46.683 "abort": true, 00:07:46.683 "seek_hole": false, 00:07:46.683 "seek_data": false, 00:07:46.683 "copy": true, 00:07:46.683 "nvme_iov_md": false 00:07:46.683 }, 00:07:46.683 "memory_domains": [ 00:07:46.683 { 00:07:46.683 "dma_device_id": "system", 00:07:46.683 "dma_device_type": 1 00:07:46.683 }, 00:07:46.683 { 00:07:46.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.683 "dma_device_type": 2 00:07:46.683 } 00:07:46.683 ], 00:07:46.683 "driver_specific": {} 00:07:46.683 } 00:07:46.683 ] 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.683 
15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.683 "name": "Existed_Raid", 00:07:46.683 "uuid": "6fe6c4e9-27c4-4390-b49d-b871fd843395", 00:07:46.683 "strip_size_kb": 0, 00:07:46.683 "state": "configuring", 00:07:46.683 "raid_level": "raid1", 00:07:46.683 "superblock": true, 00:07:46.683 "num_base_bdevs": 2, 00:07:46.683 "num_base_bdevs_discovered": 1, 00:07:46.683 "num_base_bdevs_operational": 2, 00:07:46.683 "base_bdevs_list": [ 00:07:46.683 { 00:07:46.683 "name": "BaseBdev1", 00:07:46.683 "uuid": "1639e56b-9b8f-4704-97c8-6d17034756e5", 00:07:46.683 "is_configured": true, 00:07:46.683 "data_offset": 2048, 00:07:46.683 "data_size": 63488 00:07:46.683 }, 00:07:46.683 { 00:07:46.683 "name": "BaseBdev2", 00:07:46.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.683 
"is_configured": false, 00:07:46.683 "data_offset": 0, 00:07:46.683 "data_size": 0 00:07:46.683 } 00:07:46.683 ] 00:07:46.683 }' 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.683 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.943 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.943 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.943 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.943 [2024-11-26 15:23:45.415291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.943 [2024-11-26 15:23:45.415387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.202 [2024-11-26 15:23:45.427336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.202 [2024-11-26 15:23:45.429122] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.202 [2024-11-26 15:23:45.429163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.202 15:23:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.202 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.203 15:23:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.203 "name": "Existed_Raid", 00:07:47.203 "uuid": "973097ff-e68b-49db-8c85-1bf0e133ff39", 00:07:47.203 "strip_size_kb": 0, 00:07:47.203 "state": "configuring", 00:07:47.203 "raid_level": "raid1", 00:07:47.203 "superblock": true, 00:07:47.203 "num_base_bdevs": 2, 00:07:47.203 "num_base_bdevs_discovered": 1, 00:07:47.203 "num_base_bdevs_operational": 2, 00:07:47.203 "base_bdevs_list": [ 00:07:47.203 { 00:07:47.203 "name": "BaseBdev1", 00:07:47.203 "uuid": "1639e56b-9b8f-4704-97c8-6d17034756e5", 00:07:47.203 "is_configured": true, 00:07:47.203 "data_offset": 2048, 00:07:47.203 "data_size": 63488 00:07:47.203 }, 00:07:47.203 { 00:07:47.203 "name": "BaseBdev2", 00:07:47.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.203 "is_configured": false, 00:07:47.203 "data_offset": 0, 00:07:47.203 "data_size": 0 00:07:47.203 } 00:07:47.203 ] 00:07:47.203 }' 00:07:47.203 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.203 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.464 [2024-11-26 15:23:45.854435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.464 [2024-11-26 15:23:45.854718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:47.464 [2024-11-26 15:23:45.854777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:47.464 [2024-11-26 15:23:45.855067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006150 00:07:47.464 BaseBdev2 00:07:47.464 [2024-11-26 15:23:45.855277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:47.464 [2024-11-26 15:23:45.855321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:07:47.464 [2024-11-26 15:23:45.855472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.464 15:23:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.464 [ 00:07:47.464 { 00:07:47.464 "name": "BaseBdev2", 00:07:47.464 "aliases": [ 00:07:47.464 "40eb277f-741e-488a-89b0-afcdeb97a5dd" 00:07:47.464 ], 00:07:47.464 "product_name": "Malloc disk", 00:07:47.464 "block_size": 512, 00:07:47.464 "num_blocks": 65536, 00:07:47.464 "uuid": "40eb277f-741e-488a-89b0-afcdeb97a5dd", 00:07:47.464 "assigned_rate_limits": { 00:07:47.464 "rw_ios_per_sec": 0, 00:07:47.464 "rw_mbytes_per_sec": 0, 00:07:47.464 "r_mbytes_per_sec": 0, 00:07:47.464 "w_mbytes_per_sec": 0 00:07:47.464 }, 00:07:47.464 "claimed": true, 00:07:47.464 "claim_type": "exclusive_write", 00:07:47.464 "zoned": false, 00:07:47.464 "supported_io_types": { 00:07:47.464 "read": true, 00:07:47.464 "write": true, 00:07:47.464 "unmap": true, 00:07:47.464 "flush": true, 00:07:47.464 "reset": true, 00:07:47.464 "nvme_admin": false, 00:07:47.464 "nvme_io": false, 00:07:47.464 "nvme_io_md": false, 00:07:47.464 "write_zeroes": true, 00:07:47.464 "zcopy": true, 00:07:47.464 "get_zone_info": false, 00:07:47.464 "zone_management": false, 00:07:47.464 "zone_append": false, 00:07:47.464 "compare": false, 00:07:47.464 "compare_and_write": false, 00:07:47.464 "abort": true, 00:07:47.464 "seek_hole": false, 00:07:47.464 "seek_data": false, 00:07:47.464 "copy": true, 00:07:47.464 "nvme_iov_md": false 00:07:47.464 }, 00:07:47.464 "memory_domains": [ 00:07:47.464 { 00:07:47.464 "dma_device_id": "system", 00:07:47.464 "dma_device_type": 1 00:07:47.464 }, 00:07:47.464 { 00:07:47.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.464 "dma_device_type": 2 00:07:47.464 } 00:07:47.464 ], 00:07:47.464 "driver_specific": {} 00:07:47.464 } 00:07:47.464 ] 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:47.464 15:23:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.464 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.723 15:23:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.723 "name": "Existed_Raid", 00:07:47.723 "uuid": "973097ff-e68b-49db-8c85-1bf0e133ff39", 00:07:47.723 "strip_size_kb": 0, 00:07:47.723 "state": "online", 00:07:47.723 "raid_level": "raid1", 00:07:47.723 "superblock": true, 00:07:47.723 "num_base_bdevs": 2, 00:07:47.723 "num_base_bdevs_discovered": 2, 00:07:47.723 "num_base_bdevs_operational": 2, 00:07:47.723 "base_bdevs_list": [ 00:07:47.723 { 00:07:47.723 "name": "BaseBdev1", 00:07:47.724 "uuid": "1639e56b-9b8f-4704-97c8-6d17034756e5", 00:07:47.724 "is_configured": true, 00:07:47.724 "data_offset": 2048, 00:07:47.724 "data_size": 63488 00:07:47.724 }, 00:07:47.724 { 00:07:47.724 "name": "BaseBdev2", 00:07:47.724 "uuid": "40eb277f-741e-488a-89b0-afcdeb97a5dd", 00:07:47.724 "is_configured": true, 00:07:47.724 "data_offset": 2048, 00:07:47.724 "data_size": 63488 00:07:47.724 } 00:07:47.724 ] 00:07:47.724 }' 00:07:47.724 15:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.724 15:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.984 15:23:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.984 [2024-11-26 15:23:46.350908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.984 "name": "Existed_Raid", 00:07:47.984 "aliases": [ 00:07:47.984 "973097ff-e68b-49db-8c85-1bf0e133ff39" 00:07:47.984 ], 00:07:47.984 "product_name": "Raid Volume", 00:07:47.984 "block_size": 512, 00:07:47.984 "num_blocks": 63488, 00:07:47.984 "uuid": "973097ff-e68b-49db-8c85-1bf0e133ff39", 00:07:47.984 "assigned_rate_limits": { 00:07:47.984 "rw_ios_per_sec": 0, 00:07:47.984 "rw_mbytes_per_sec": 0, 00:07:47.984 "r_mbytes_per_sec": 0, 00:07:47.984 "w_mbytes_per_sec": 0 00:07:47.984 }, 00:07:47.984 "claimed": false, 00:07:47.984 "zoned": false, 00:07:47.984 "supported_io_types": { 00:07:47.984 "read": true, 00:07:47.984 "write": true, 00:07:47.984 "unmap": false, 00:07:47.984 "flush": false, 00:07:47.984 "reset": true, 00:07:47.984 "nvme_admin": false, 00:07:47.984 "nvme_io": false, 00:07:47.984 "nvme_io_md": false, 00:07:47.984 "write_zeroes": true, 00:07:47.984 "zcopy": false, 00:07:47.984 "get_zone_info": false, 00:07:47.984 "zone_management": false, 00:07:47.984 "zone_append": false, 00:07:47.984 "compare": false, 00:07:47.984 "compare_and_write": false, 00:07:47.984 "abort": false, 00:07:47.984 "seek_hole": false, 00:07:47.984 "seek_data": false, 00:07:47.984 "copy": false, 00:07:47.984 "nvme_iov_md": false 00:07:47.984 }, 00:07:47.984 "memory_domains": [ 00:07:47.984 { 00:07:47.984 "dma_device_id": 
"system", 00:07:47.984 "dma_device_type": 1 00:07:47.984 }, 00:07:47.984 { 00:07:47.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.984 "dma_device_type": 2 00:07:47.984 }, 00:07:47.984 { 00:07:47.984 "dma_device_id": "system", 00:07:47.984 "dma_device_type": 1 00:07:47.984 }, 00:07:47.984 { 00:07:47.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.984 "dma_device_type": 2 00:07:47.984 } 00:07:47.984 ], 00:07:47.984 "driver_specific": { 00:07:47.984 "raid": { 00:07:47.984 "uuid": "973097ff-e68b-49db-8c85-1bf0e133ff39", 00:07:47.984 "strip_size_kb": 0, 00:07:47.984 "state": "online", 00:07:47.984 "raid_level": "raid1", 00:07:47.984 "superblock": true, 00:07:47.984 "num_base_bdevs": 2, 00:07:47.984 "num_base_bdevs_discovered": 2, 00:07:47.984 "num_base_bdevs_operational": 2, 00:07:47.984 "base_bdevs_list": [ 00:07:47.984 { 00:07:47.984 "name": "BaseBdev1", 00:07:47.984 "uuid": "1639e56b-9b8f-4704-97c8-6d17034756e5", 00:07:47.984 "is_configured": true, 00:07:47.984 "data_offset": 2048, 00:07:47.984 "data_size": 63488 00:07:47.984 }, 00:07:47.984 { 00:07:47.984 "name": "BaseBdev2", 00:07:47.984 "uuid": "40eb277f-741e-488a-89b0-afcdeb97a5dd", 00:07:47.984 "is_configured": true, 00:07:47.984 "data_offset": 2048, 00:07:47.984 "data_size": 63488 00:07:47.984 } 00:07:47.984 ] 00:07:47.984 } 00:07:47.984 } 00:07:47.984 }' 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:47.984 BaseBdev2' 00:07:47.984 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.244 [2024-11-26 15:23:46.542724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.244 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.245 "name": "Existed_Raid", 00:07:48.245 "uuid": "973097ff-e68b-49db-8c85-1bf0e133ff39", 00:07:48.245 "strip_size_kb": 0, 00:07:48.245 "state": "online", 00:07:48.245 "raid_level": "raid1", 00:07:48.245 "superblock": true, 00:07:48.245 "num_base_bdevs": 2, 00:07:48.245 "num_base_bdevs_discovered": 1, 00:07:48.245 "num_base_bdevs_operational": 1, 00:07:48.245 "base_bdevs_list": [ 00:07:48.245 { 00:07:48.245 "name": null, 00:07:48.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.245 "is_configured": false, 00:07:48.245 "data_offset": 0, 00:07:48.245 "data_size": 63488 00:07:48.245 }, 00:07:48.245 { 00:07:48.245 "name": "BaseBdev2", 00:07:48.245 "uuid": "40eb277f-741e-488a-89b0-afcdeb97a5dd", 00:07:48.245 "is_configured": true, 00:07:48.245 "data_offset": 2048, 00:07:48.245 "data_size": 63488 00:07:48.245 } 00:07:48.245 ] 00:07:48.245 }' 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.245 15:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.813 15:23:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.813 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.813 15:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.813 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.813 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.813 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.813 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.813 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.813 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.813 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.813 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.813 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.813 [2024-11-26 15:23:47.054101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.813 [2024-11-26 15:23:47.054223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.813 [2024-11-26 15:23:47.065730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.813 [2024-11-26 15:23:47.065784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.814 [2024-11-26 15:23:47.065795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:07:48.814 15:23:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75822 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75822 ']' 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75822 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75822 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75822' 00:07:48.814 killing process with pid 75822 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75822 00:07:48.814 [2024-11-26 15:23:47.161679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.814 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75822 00:07:48.814 [2024-11-26 15:23:47.162689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.073 15:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:49.073 00:07:49.073 real 0m3.774s 00:07:49.073 user 0m5.926s 00:07:49.073 sys 0m0.787s 00:07:49.073 ************************************ 00:07:49.073 END TEST raid_state_function_test_sb 00:07:49.073 ************************************ 00:07:49.073 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.073 15:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.073 15:23:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:49.073 15:23:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:49.073 15:23:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.073 15:23:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.073 ************************************ 00:07:49.073 START TEST raid_superblock_test 00:07:49.073 ************************************ 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76063 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76063 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76063 ']' 00:07:49.073 15:23:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.073 15:23:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.073 [2024-11-26 15:23:47.535390] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:49.073 [2024-11-26 15:23:47.535599] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76063 ] 00:07:49.333 [2024-11-26 15:23:47.669381] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:49.333 [2024-11-26 15:23:47.707523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.333 [2024-11-26 15:23:47.732137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.333 [2024-11-26 15:23:47.775504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.333 [2024-11-26 15:23:47.775618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.902 malloc1 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.902 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.902 [2024-11-26 15:23:48.375721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:50.161 [2024-11-26 15:23:48.375844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.161 [2024-11-26 15:23:48.375879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:50.161 [2024-11-26 15:23:48.375890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.161 [2024-11-26 15:23:48.378034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.161 [2024-11-26 15:23:48.378071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:50.161 pt1 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.161 malloc2 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.161 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.161 [2024-11-26 15:23:48.404209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.161 [2024-11-26 15:23:48.404295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.162 [2024-11-26 15:23:48.404345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:50.162 [2024-11-26 15:23:48.404372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.162 [2024-11-26 15:23:48.406445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.162 [2024-11-26 15:23:48.406510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:50.162 pt2 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.162 [2024-11-26 15:23:48.416240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:50.162 [2024-11-26 15:23:48.418057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.162 [2024-11-26 15:23:48.418257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:50.162 [2024-11-26 15:23:48.418304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.162 [2024-11-26 15:23:48.418580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:50.162 [2024-11-26 15:23:48.418761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:50.162 [2024-11-26 15:23:48.418807] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:50.162 [2024-11-26 15:23:48.418977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.162 "name": "raid_bdev1", 00:07:50.162 "uuid": "296fe9fa-114b-4686-aa76-6b71ae9da3ea", 00:07:50.162 "strip_size_kb": 0, 00:07:50.162 "state": "online", 00:07:50.162 "raid_level": "raid1", 00:07:50.162 "superblock": true, 00:07:50.162 "num_base_bdevs": 2, 00:07:50.162 "num_base_bdevs_discovered": 2, 00:07:50.162 "num_base_bdevs_operational": 2, 00:07:50.162 "base_bdevs_list": [ 00:07:50.162 { 00:07:50.162 "name": "pt1", 00:07:50.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.162 "is_configured": true, 00:07:50.162 "data_offset": 2048, 00:07:50.162 "data_size": 63488 00:07:50.162 }, 00:07:50.162 { 00:07:50.162 "name": "pt2", 00:07:50.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.162 "is_configured": true, 00:07:50.162 
"data_offset": 2048, 00:07:50.162 "data_size": 63488 00:07:50.162 } 00:07:50.162 ] 00:07:50.162 }' 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.162 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.421 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:50.421 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:50.421 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.421 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.421 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.421 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.681 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.681 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.681 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.681 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.681 [2024-11-26 15:23:48.900636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.681 15:23:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.681 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.681 "name": "raid_bdev1", 00:07:50.681 "aliases": [ 00:07:50.681 "296fe9fa-114b-4686-aa76-6b71ae9da3ea" 00:07:50.681 ], 00:07:50.681 "product_name": "Raid Volume", 00:07:50.681 "block_size": 512, 00:07:50.681 "num_blocks": 63488, 00:07:50.681 "uuid": "296fe9fa-114b-4686-aa76-6b71ae9da3ea", 
00:07:50.681 "assigned_rate_limits": { 00:07:50.681 "rw_ios_per_sec": 0, 00:07:50.681 "rw_mbytes_per_sec": 0, 00:07:50.681 "r_mbytes_per_sec": 0, 00:07:50.681 "w_mbytes_per_sec": 0 00:07:50.681 }, 00:07:50.681 "claimed": false, 00:07:50.681 "zoned": false, 00:07:50.681 "supported_io_types": { 00:07:50.681 "read": true, 00:07:50.681 "write": true, 00:07:50.681 "unmap": false, 00:07:50.681 "flush": false, 00:07:50.681 "reset": true, 00:07:50.681 "nvme_admin": false, 00:07:50.681 "nvme_io": false, 00:07:50.681 "nvme_io_md": false, 00:07:50.681 "write_zeroes": true, 00:07:50.681 "zcopy": false, 00:07:50.681 "get_zone_info": false, 00:07:50.681 "zone_management": false, 00:07:50.681 "zone_append": false, 00:07:50.681 "compare": false, 00:07:50.681 "compare_and_write": false, 00:07:50.681 "abort": false, 00:07:50.681 "seek_hole": false, 00:07:50.681 "seek_data": false, 00:07:50.681 "copy": false, 00:07:50.681 "nvme_iov_md": false 00:07:50.681 }, 00:07:50.681 "memory_domains": [ 00:07:50.681 { 00:07:50.681 "dma_device_id": "system", 00:07:50.681 "dma_device_type": 1 00:07:50.681 }, 00:07:50.681 { 00:07:50.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.681 "dma_device_type": 2 00:07:50.681 }, 00:07:50.681 { 00:07:50.681 "dma_device_id": "system", 00:07:50.681 "dma_device_type": 1 00:07:50.681 }, 00:07:50.681 { 00:07:50.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.681 "dma_device_type": 2 00:07:50.681 } 00:07:50.681 ], 00:07:50.681 "driver_specific": { 00:07:50.681 "raid": { 00:07:50.681 "uuid": "296fe9fa-114b-4686-aa76-6b71ae9da3ea", 00:07:50.681 "strip_size_kb": 0, 00:07:50.681 "state": "online", 00:07:50.681 "raid_level": "raid1", 00:07:50.681 "superblock": true, 00:07:50.681 "num_base_bdevs": 2, 00:07:50.681 "num_base_bdevs_discovered": 2, 00:07:50.681 "num_base_bdevs_operational": 2, 00:07:50.681 "base_bdevs_list": [ 00:07:50.681 { 00:07:50.681 "name": "pt1", 00:07:50.681 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.681 "is_configured": 
true, 00:07:50.681 "data_offset": 2048, 00:07:50.681 "data_size": 63488 00:07:50.681 }, 00:07:50.681 { 00:07:50.681 "name": "pt2", 00:07:50.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.681 "is_configured": true, 00:07:50.681 "data_offset": 2048, 00:07:50.681 "data_size": 63488 00:07:50.681 } 00:07:50.681 ] 00:07:50.681 } 00:07:50.681 } 00:07:50.681 }' 00:07:50.681 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.681 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:50.681 pt2' 00:07:50.681 15:23:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.681 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.682 [2024-11-26 15:23:49.108642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.682 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.682 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=296fe9fa-114b-4686-aa76-6b71ae9da3ea 00:07:50.682 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 296fe9fa-114b-4686-aa76-6b71ae9da3ea ']' 00:07:50.682 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:50.682 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.682 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 [2024-11-26 
15:23:49.156406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.941 [2024-11-26 15:23:49.156468] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.941 [2024-11-26 15:23:49.156570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.941 [2024-11-26 15:23:49.156664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.941 [2024-11-26 15:23:49.156707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:50.941 
15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 [2024-11-26 15:23:49.292465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:50.941 [2024-11-26 15:23:49.294371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:50.941 [2024-11-26 15:23:49.294473] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:50.941 [2024-11-26 15:23:49.294550] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:50.941 [2024-11-26 15:23:49.294606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.941 [2024-11-26 15:23:49.294629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:07:50.941 request: 00:07:50.941 { 00:07:50.941 "name": "raid_bdev1", 00:07:50.941 "raid_level": "raid1", 00:07:50.941 "base_bdevs": [ 00:07:50.941 "malloc1", 00:07:50.941 "malloc2" 00:07:50.941 ], 00:07:50.941 "superblock": false, 00:07:50.941 "method": "bdev_raid_create", 00:07:50.941 "req_id": 1 00:07:50.941 } 00:07:50.941 Got JSON-RPC error response 00:07:50.941 response: 00:07:50.941 { 00:07:50.941 "code": -17, 00:07:50.941 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:50.941 } 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:50.941 15:23:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.941 [2024-11-26 15:23:49.360459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:50.941 [2024-11-26 15:23:49.360544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.941 [2024-11-26 15:23:49.360576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:50.941 [2024-11-26 15:23:49.360606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.941 [2024-11-26 15:23:49.362755] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.941 [2024-11-26 15:23:49.362825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:50.941 [2024-11-26 15:23:49.362891] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:50.941 [2024-11-26 15:23:49.362952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:50.941 pt1 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.941 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.942 15:23:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.942 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.942 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.942 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.942 "name": "raid_bdev1", 00:07:50.942 "uuid": "296fe9fa-114b-4686-aa76-6b71ae9da3ea", 00:07:50.942 "strip_size_kb": 0, 00:07:50.942 "state": "configuring", 00:07:50.942 "raid_level": "raid1", 00:07:50.942 "superblock": true, 00:07:50.942 "num_base_bdevs": 2, 00:07:50.942 "num_base_bdevs_discovered": 1, 00:07:50.942 "num_base_bdevs_operational": 2, 00:07:50.942 "base_bdevs_list": [ 00:07:50.942 { 00:07:50.942 "name": "pt1", 00:07:50.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.942 "is_configured": true, 00:07:50.942 "data_offset": 2048, 00:07:50.942 "data_size": 63488 00:07:50.942 }, 00:07:50.942 { 00:07:50.942 "name": null, 00:07:50.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.942 "is_configured": false, 00:07:50.942 "data_offset": 2048, 00:07:50.942 "data_size": 63488 00:07:50.942 } 00:07:50.942 ] 00:07:50.942 }' 00:07:50.942 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.942 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
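`verify_raid_bdev_state` (line 113 of `bdev_raid.sh`) filters `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and then asserts on the fields. A dependency-free sketch of the state extraction, using a fragment of the JSON shown above (the `sed` one-liner is a stand-in for the script's `jq` filter, not what the script actually runs):

```shell
#!/bin/bash
# Extract the "state" field from the raid_bdev_info JSON in the log.
raid_bdev_info='"state": "configuring",'

state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": *"\([a-z]*\)".*/\1/p')

if [ "$state" = "configuring" ]; then
  echo "raid_bdev1 is configuring: 1 of 2 base bdevs discovered, waiting for pt2"
fi
```

This matches the log: only `pt1` has been re-created at this point, so the raid bdev sits in `configuring` until the second base bdev arrives.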
00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.511 [2024-11-26 15:23:49.812597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:51.511 [2024-11-26 15:23:49.812710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.511 [2024-11-26 15:23:49.812751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:51.511 [2024-11-26 15:23:49.812790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.511 [2024-11-26 15:23:49.813234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.511 [2024-11-26 15:23:49.813294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:51.511 [2024-11-26 15:23:49.813392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:51.511 [2024-11-26 15:23:49.813445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:51.511 [2024-11-26 15:23:49.813575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:51.511 [2024-11-26 15:23:49.813590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:51.511 [2024-11-26 15:23:49.813825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:51.511 [2024-11-26 15:23:49.813953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:51.511 [2024-11-26 15:23:49.813962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:51.511 [2024-11-26 15:23:49.814069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.511 pt2 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.511 15:23:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.511 "name": "raid_bdev1", 00:07:51.511 "uuid": 
"296fe9fa-114b-4686-aa76-6b71ae9da3ea", 00:07:51.511 "strip_size_kb": 0, 00:07:51.511 "state": "online", 00:07:51.511 "raid_level": "raid1", 00:07:51.511 "superblock": true, 00:07:51.511 "num_base_bdevs": 2, 00:07:51.511 "num_base_bdevs_discovered": 2, 00:07:51.511 "num_base_bdevs_operational": 2, 00:07:51.511 "base_bdevs_list": [ 00:07:51.511 { 00:07:51.511 "name": "pt1", 00:07:51.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.511 "is_configured": true, 00:07:51.511 "data_offset": 2048, 00:07:51.511 "data_size": 63488 00:07:51.511 }, 00:07:51.511 { 00:07:51.511 "name": "pt2", 00:07:51.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.511 "is_configured": true, 00:07:51.511 "data_offset": 2048, 00:07:51.511 "data_size": 63488 00:07:51.511 } 00:07:51.511 ] 00:07:51.511 }' 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.511 15:23:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.821 15:23:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.821 [2024-11-26 15:23:50.248994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.821 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.821 "name": "raid_bdev1", 00:07:51.821 "aliases": [ 00:07:51.821 "296fe9fa-114b-4686-aa76-6b71ae9da3ea" 00:07:51.821 ], 00:07:51.821 "product_name": "Raid Volume", 00:07:51.821 "block_size": 512, 00:07:51.821 "num_blocks": 63488, 00:07:51.821 "uuid": "296fe9fa-114b-4686-aa76-6b71ae9da3ea", 00:07:51.821 "assigned_rate_limits": { 00:07:51.821 "rw_ios_per_sec": 0, 00:07:51.821 "rw_mbytes_per_sec": 0, 00:07:51.821 "r_mbytes_per_sec": 0, 00:07:51.821 "w_mbytes_per_sec": 0 00:07:51.821 }, 00:07:51.821 "claimed": false, 00:07:51.821 "zoned": false, 00:07:51.821 "supported_io_types": { 00:07:51.821 "read": true, 00:07:51.821 "write": true, 00:07:51.821 "unmap": false, 00:07:51.821 "flush": false, 00:07:51.821 "reset": true, 00:07:51.822 "nvme_admin": false, 00:07:51.822 "nvme_io": false, 00:07:51.822 "nvme_io_md": false, 00:07:51.822 "write_zeroes": true, 00:07:51.822 "zcopy": false, 00:07:51.822 "get_zone_info": false, 00:07:51.822 "zone_management": false, 00:07:51.822 "zone_append": false, 00:07:51.822 "compare": false, 00:07:51.822 "compare_and_write": false, 00:07:51.822 "abort": false, 00:07:51.822 "seek_hole": false, 00:07:51.822 "seek_data": false, 00:07:51.822 "copy": false, 00:07:51.822 "nvme_iov_md": false 00:07:51.822 }, 00:07:51.822 "memory_domains": [ 00:07:51.822 { 00:07:51.822 "dma_device_id": "system", 00:07:51.822 "dma_device_type": 1 00:07:51.822 }, 00:07:51.822 { 00:07:51.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.822 "dma_device_type": 2 00:07:51.822 }, 00:07:51.822 { 00:07:51.822 "dma_device_id": "system", 00:07:51.822 "dma_device_type": 
1 00:07:51.822 }, 00:07:51.822 { 00:07:51.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.822 "dma_device_type": 2 00:07:51.822 } 00:07:51.822 ], 00:07:51.822 "driver_specific": { 00:07:51.822 "raid": { 00:07:51.822 "uuid": "296fe9fa-114b-4686-aa76-6b71ae9da3ea", 00:07:51.822 "strip_size_kb": 0, 00:07:51.822 "state": "online", 00:07:51.822 "raid_level": "raid1", 00:07:51.822 "superblock": true, 00:07:51.822 "num_base_bdevs": 2, 00:07:51.822 "num_base_bdevs_discovered": 2, 00:07:51.822 "num_base_bdevs_operational": 2, 00:07:51.822 "base_bdevs_list": [ 00:07:51.822 { 00:07:51.822 "name": "pt1", 00:07:51.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.822 "is_configured": true, 00:07:51.822 "data_offset": 2048, 00:07:51.822 "data_size": 63488 00:07:51.822 }, 00:07:51.822 { 00:07:51.822 "name": "pt2", 00:07:51.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.822 "is_configured": true, 00:07:51.822 "data_offset": 2048, 00:07:51.822 "data_size": 63488 00:07:51.822 } 00:07:51.822 ] 00:07:51.822 } 00:07:51.822 } 00:07:51.822 }' 00:07:51.822 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:52.081 pt2' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.081 15:23:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.081 [2024-11-26 15:23:50.477018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
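The odd-looking `[[ 512 == \5\1\2\ \ \ ]]` comparisons above come from `jq`'s `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`: for a plain 512-byte bdev the last three fields are null, so the join yields `512` followed by three spaces. Inside bash `[[ == ]]` the right-hand side is a pattern, so the trace shows every space backslash-escaped. A small bash sketch of why (values taken from the log; requires bash, not plain sh):

```shell
#!/bin/bash
# What jq's join(" ") emits for block_size=512 with null md fields:
cmp_base_bdev='512   '

# Quoting the RHS makes [[ == ]] do a literal string match, which is
# equivalent to the escaped-space pattern seen in the xtrace output.
if [[ $cmp_base_bdev == '512   ' ]]; then
  echo "base bdev geometry matches raid bdev geometry"
fi
```

An unquoted, unescaped RHS would undergo word splitting of the pattern and the comparison would fail, which is why the trace escapes each trailing space.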
raid_bdev_dump_config_json 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 296fe9fa-114b-4686-aa76-6b71ae9da3ea '!=' 296fe9fa-114b-4686-aa76-6b71ae9da3ea ']' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.081 [2024-11-26 15:23:50.512804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.081 "name": "raid_bdev1", 00:07:52.081 "uuid": "296fe9fa-114b-4686-aa76-6b71ae9da3ea", 00:07:52.081 "strip_size_kb": 0, 00:07:52.081 "state": "online", 00:07:52.081 "raid_level": "raid1", 00:07:52.081 "superblock": true, 00:07:52.081 "num_base_bdevs": 2, 00:07:52.081 "num_base_bdevs_discovered": 1, 00:07:52.081 "num_base_bdevs_operational": 1, 00:07:52.081 "base_bdevs_list": [ 00:07:52.081 { 00:07:52.081 "name": null, 00:07:52.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.081 "is_configured": false, 00:07:52.081 "data_offset": 0, 00:07:52.081 "data_size": 63488 00:07:52.081 }, 00:07:52.081 { 00:07:52.081 "name": "pt2", 00:07:52.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.081 "is_configured": true, 00:07:52.081 "data_offset": 2048, 00:07:52.081 "data_size": 63488 00:07:52.081 } 00:07:52.081 ] 00:07:52.081 }' 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.081 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- 
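After `bdev_passthru_delete pt1`, the info JSON above shows the raid bdev still `online` but with `num_base_bdevs_discovered` dropped to 1 of 2: raid1 mirrors the data, so losing one base bdev degrades the array without taking it offline. A sketch of that degraded-state condition, with the counts copied from the log (the variable names are illustrative):

```shell
#!/bin/bash
# Values from the raid_bdev_info JSON after pt1 was removed.
state=online
discovered=1
operational=2

if [ "$state" = online ] && [ "$discovered" -lt "$operational" ]; then
  echo "raid_bdev1 is online but degraded ($discovered of $operational base bdevs)"
fi
```

A raid0 bdev in the same situation would instead drop out of `online`, since striping has no redundancy to fall back on.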
bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.650 [2024-11-26 15:23:50.944953] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.650 [2024-11-26 15:23:50.945033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.650 [2024-11-26 15:23:50.945130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.650 [2024-11-26 15:23:50.945202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.650 [2024-11-26 15:23:50.945255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:52.650 15:23:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.650 15:23:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.650 [2024-11-26 15:23:51.000942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:52.650 [2024-11-26 15:23:51.001047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.650 [2024-11-26 15:23:51.001066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:52.650 [2024-11-26 15:23:51.001077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.650 [2024-11-26 15:23:51.003260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.650 [2024-11-26 15:23:51.003299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 
00:07:52.650 [2024-11-26 15:23:51.003368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:52.650 [2024-11-26 15:23:51.003400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:52.650 [2024-11-26 15:23:51.003478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.650 [2024-11-26 15:23:51.003490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:52.650 [2024-11-26 15:23:51.003705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:07:52.650 [2024-11-26 15:23:51.003829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.650 [2024-11-26 15:23:51.003844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:52.650 [2024-11-26 15:23:51.003951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.650 pt2 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.650 "name": "raid_bdev1", 00:07:52.650 "uuid": "296fe9fa-114b-4686-aa76-6b71ae9da3ea", 00:07:52.650 "strip_size_kb": 0, 00:07:52.650 "state": "online", 00:07:52.650 "raid_level": "raid1", 00:07:52.650 "superblock": true, 00:07:52.650 "num_base_bdevs": 2, 00:07:52.650 "num_base_bdevs_discovered": 1, 00:07:52.650 "num_base_bdevs_operational": 1, 00:07:52.650 "base_bdevs_list": [ 00:07:52.650 { 00:07:52.650 "name": null, 00:07:52.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.650 "is_configured": false, 00:07:52.650 "data_offset": 2048, 00:07:52.650 "data_size": 63488 00:07:52.650 }, 00:07:52.650 { 00:07:52.650 "name": "pt2", 00:07:52.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.650 "is_configured": true, 00:07:52.650 "data_offset": 2048, 00:07:52.650 "data_size": 63488 00:07:52.650 } 00:07:52.650 ] 00:07:52.650 }' 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.650 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.220 15:23:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.220 [2024-11-26 15:23:51.393090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.220 [2024-11-26 15:23:51.393171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.220 [2024-11-26 15:23:51.393270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.220 [2024-11-26 15:23:51.393354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.220 [2024-11-26 15:23:51.393396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b 
malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.220 [2024-11-26 15:23:51.453074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:53.220 [2024-11-26 15:23:51.453171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.220 [2024-11-26 15:23:51.453241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:53.220 [2024-11-26 15:23:51.453274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.220 [2024-11-26 15:23:51.455395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.220 [2024-11-26 15:23:51.455459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:53.220 [2024-11-26 15:23:51.455539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:53.220 [2024-11-26 15:23:51.455585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:53.220 [2024-11-26 15:23:51.455684] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:53.220 [2024-11-26 15:23:51.455694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.220 [2024-11-26 15:23:51.455720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:07:53.220 [2024-11-26 15:23:51.455765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:53.220 [2024-11-26 15:23:51.455836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:07:53.220 [2024-11-26 15:23:51.455845] 
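The key line in this block is `raid_bdev_examine_sb`: the superblock found on `pt2` carries sequence number 4, which is greater than the 2 recorded by the half-configured `raid_bdev1`, so the stale raid bdev is deleted and rebuilt from the newer superblock. A sketch of that reconciliation rule, with the sequence numbers taken from the log (the echo text is illustrative):

```shell
#!/bin/bash
# Sequence numbers from the examine debug line above.
existing_sb_seq=2
found_sb_seq=4

if [ "$found_sb_seq" -gt "$existing_sb_seq" ]; then
  echo "superblock on pt2 is newer ($found_sb_seq > $existing_sb_seq): re-create raid_bdev1 from it"
fi
```

This is how the superblock test exercises recovery: whichever base bdev holds the highest sequence number wins, and older raid metadata is discarded rather than merged.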
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:53.220 [2024-11-26 15:23:51.456066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:53.220 [2024-11-26 15:23:51.456179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:07:53.220 [2024-11-26 15:23:51.456218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:07:53.220 [2024-11-26 15:23:51.456336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.220 pt1 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.220 15:23:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.221 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.221 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.221 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.221 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.221 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.221 "name": "raid_bdev1", 00:07:53.221 "uuid": "296fe9fa-114b-4686-aa76-6b71ae9da3ea", 00:07:53.221 "strip_size_kb": 0, 00:07:53.221 "state": "online", 00:07:53.221 "raid_level": "raid1", 00:07:53.221 "superblock": true, 00:07:53.221 "num_base_bdevs": 2, 00:07:53.221 "num_base_bdevs_discovered": 1, 00:07:53.221 "num_base_bdevs_operational": 1, 00:07:53.221 "base_bdevs_list": [ 00:07:53.221 { 00:07:53.221 "name": null, 00:07:53.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.221 "is_configured": false, 00:07:53.221 "data_offset": 2048, 00:07:53.221 "data_size": 63488 00:07:53.221 }, 00:07:53.221 { 00:07:53.221 "name": "pt2", 00:07:53.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:53.221 "is_configured": true, 00:07:53.221 "data_offset": 2048, 00:07:53.221 "data_size": 63488 00:07:53.221 } 00:07:53.221 ] 00:07:53.221 }' 00:07:53.221 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.221 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.480 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.480 [2024-11-26 15:23:51.937443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 296fe9fa-114b-4686-aa76-6b71ae9da3ea '!=' 296fe9fa-114b-4686-aa76-6b71ae9da3ea ']' 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76063 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76063 ']' 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76063 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76063 00:07:53.740 killing process with pid 76063 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.740 15:23:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76063' 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76063 00:07:53.740 [2024-11-26 15:23:51.998634] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.740 [2024-11-26 15:23:51.998716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.740 [2024-11-26 15:23:51.998762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.740 [2024-11-26 15:23:51.998773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:07:53.740 15:23:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76063 00:07:53.740 [2024-11-26 15:23:52.021123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.000 15:23:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:54.000 00:07:54.000 real 0m4.785s 00:07:54.000 user 0m7.811s 00:07:54.000 sys 0m1.018s 00:07:54.000 15:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.000 ************************************ 00:07:54.000 END TEST raid_superblock_test 00:07:54.000 ************************************ 00:07:54.000 15:23:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.000 15:23:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:54.000 15:23:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:54.000 15:23:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.000 15:23:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.000 
************************************ 00:07:54.000 START TEST raid_read_error_test 00:07:54.000 ************************************ 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # 
local bdevperf_log 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:54.000 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.S3RoK7Ia4x 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76382 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76382 00:07:54.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76382 ']' 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.001 15:23:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.001 [2024-11-26 15:23:52.410504] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:07:54.001 [2024-11-26 15:23:52.410630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76382 ] 00:07:54.261 [2024-11-26 15:23:52.544824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:54.261 [2024-11-26 15:23:52.582310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.261 [2024-11-26 15:23:52.606962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.261 [2024-11-26 15:23:52.649042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.261 [2024-11-26 15:23:52.649080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.830 BaseBdev1_malloc 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.830 15:23:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.830 true 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.830 [2024-11-26 15:23:53.256465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:54.830 [2024-11-26 15:23:53.256543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.830 [2024-11-26 15:23:53.256567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:54.830 [2024-11-26 15:23:53.256581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.830 [2024-11-26 15:23:53.258682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.830 [2024-11-26 15:23:53.258784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:54.830 BaseBdev1 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.830 BaseBdev2_malloc 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.830 true 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.830 [2024-11-26 15:23:53.296973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:54.830 [2024-11-26 15:23:53.297022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.830 [2024-11-26 15:23:53.297037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:54.830 [2024-11-26 15:23:53.297046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.830 [2024-11-26 15:23:53.299100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.830 [2024-11-26 15:23:53.299135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:54.830 BaseBdev2 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:54.830 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.830 15:23:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.089 [2024-11-26 15:23:53.309004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.089 [2024-11-26 15:23:53.310795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.089 [2024-11-26 15:23:53.310965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:55.089 [2024-11-26 15:23:53.310993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:55.089 [2024-11-26 15:23:53.311255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:55.089 [2024-11-26 15:23:53.311432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:55.089 [2024-11-26 15:23:53.311480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:55.089 [2024-11-26 15:23:53.311599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.089 "name": "raid_bdev1", 00:07:55.089 "uuid": "494af06b-d2dd-4e31-9ec8-6c5540556a3a", 00:07:55.089 "strip_size_kb": 0, 00:07:55.089 "state": "online", 00:07:55.089 "raid_level": "raid1", 00:07:55.089 "superblock": true, 00:07:55.089 "num_base_bdevs": 2, 00:07:55.089 "num_base_bdevs_discovered": 2, 00:07:55.089 "num_base_bdevs_operational": 2, 00:07:55.089 "base_bdevs_list": [ 00:07:55.089 { 00:07:55.089 "name": "BaseBdev1", 00:07:55.089 "uuid": "ff2c8d32-006f-5049-ace7-68a84ffd7a83", 00:07:55.089 "is_configured": true, 00:07:55.089 "data_offset": 2048, 00:07:55.089 "data_size": 63488 00:07:55.089 }, 00:07:55.089 { 00:07:55.089 "name": "BaseBdev2", 00:07:55.089 "uuid": "5ccfb386-8ef1-5e8a-97b3-cd040a56152f", 00:07:55.089 "is_configured": true, 00:07:55.089 "data_offset": 2048, 00:07:55.089 "data_size": 63488 00:07:55.089 } 00:07:55.089 ] 00:07:55.089 }' 00:07:55.089 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.090 15:23:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.349 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:55.349 15:23:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:55.349 [2024-11-26 15:23:53.789511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.287 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.288 15:23:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.547 15:23:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.547 "name": "raid_bdev1", 00:07:56.547 "uuid": "494af06b-d2dd-4e31-9ec8-6c5540556a3a", 00:07:56.547 "strip_size_kb": 0, 00:07:56.547 "state": "online", 00:07:56.547 "raid_level": "raid1", 00:07:56.547 "superblock": true, 00:07:56.547 "num_base_bdevs": 2, 00:07:56.547 "num_base_bdevs_discovered": 2, 00:07:56.547 "num_base_bdevs_operational": 2, 00:07:56.547 "base_bdevs_list": [ 00:07:56.547 { 00:07:56.547 "name": "BaseBdev1", 00:07:56.547 "uuid": "ff2c8d32-006f-5049-ace7-68a84ffd7a83", 00:07:56.547 "is_configured": true, 00:07:56.547 "data_offset": 2048, 00:07:56.547 "data_size": 63488 00:07:56.547 }, 00:07:56.547 { 00:07:56.547 "name": "BaseBdev2", 00:07:56.547 "uuid": "5ccfb386-8ef1-5e8a-97b3-cd040a56152f", 00:07:56.547 "is_configured": true, 00:07:56.547 "data_offset": 2048, 00:07:56.547 "data_size": 63488 00:07:56.547 } 00:07:56.547 ] 00:07:56.547 }' 00:07:56.547 15:23:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.547 15:23:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.807 [2024-11-26 15:23:55.167845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.807 [2024-11-26 15:23:55.167950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.807 [2024-11-26 15:23:55.170514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.807 [2024-11-26 15:23:55.170612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.807 [2024-11-26 15:23:55.170715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.807 [2024-11-26 15:23:55.170783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:07:56.807 { 00:07:56.807 "results": [ 00:07:56.807 { 00:07:56.807 "job": "raid_bdev1", 00:07:56.807 "core_mask": "0x1", 00:07:56.807 "workload": "randrw", 00:07:56.807 "percentage": 50, 00:07:56.807 "status": "finished", 00:07:56.807 "queue_depth": 1, 00:07:56.807 "io_size": 131072, 00:07:56.807 "runtime": 1.376509, 00:07:56.807 "iops": 20437.207457415825, 00:07:56.807 "mibps": 2554.650932176978, 00:07:56.807 "io_failed": 0, 00:07:56.807 "io_timeout": 0, 00:07:56.807 "avg_latency_us": 46.487444994311474, 00:07:56.807 "min_latency_us": 21.53229321014556, 00:07:56.807 "max_latency_us": 1392.3472500653709 00:07:56.807 } 00:07:56.807 ], 00:07:56.807 "core_count": 1 00:07:56.807 } 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76382 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76382 ']' 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76382 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76382 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76382' 00:07:56.807 killing process with pid 76382 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76382 00:07:56.807 [2024-11-26 15:23:55.208976] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.807 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76382 00:07:56.807 [2024-11-26 15:23:55.223935] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.S3RoK7Ia4x 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:57.067 00:07:57.067 real 0m3.134s 00:07:57.067 user 0m3.961s 00:07:57.067 sys 0m0.495s 00:07:57.067 ************************************ 00:07:57.067 END TEST raid_read_error_test 00:07:57.067 ************************************ 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.067 15:23:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.067 15:23:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:57.067 15:23:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.067 15:23:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.067 15:23:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.067 ************************************ 00:07:57.067 START TEST raid_write_error_test 00:07:57.067 ************************************ 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.067 15:23:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TScnynIFsG 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76511 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76511 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76511 ']' 00:07:57.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.067 15:23:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.326 [2024-11-26 15:23:55.612269] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:57.326 [2024-11-26 15:23:55.612399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76511 ] 00:07:57.326 [2024-11-26 15:23:55.746331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:57.326 [2024-11-26 15:23:55.786706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.585 [2024-11-26 15:23:55.813354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.585 [2024-11-26 15:23:55.855739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.585 [2024-11-26 15:23:55.855782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.153 BaseBdev1_malloc 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.153 true 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.153 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.153 15:23:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.153 [2024-11-26 15:23:56.459323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.153 [2024-11-26 15:23:56.459377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.153 [2024-11-26 15:23:56.459393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.154 [2024-11-26 15:23:56.459405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.154 [2024-11-26 15:23:56.461481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.154 [2024-11-26 15:23:56.461590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.154 BaseBdev1 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.154 BaseBdev2_malloc 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.154 true 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.154 [2024-11-26 15:23:56.499819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.154 [2024-11-26 15:23:56.499866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.154 [2024-11-26 15:23:56.499896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.154 [2024-11-26 15:23:56.499906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.154 [2024-11-26 15:23:56.501959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.154 [2024-11-26 15:23:56.502007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.154 BaseBdev2 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.154 [2024-11-26 15:23:56.511846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.154 [2024-11-26 15:23:56.513693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.154 [2024-11-26 15:23:56.513855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:58.154 [2024-11-26 
15:23:56.513868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.154 [2024-11-26 15:23:56.514089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:07:58.154 [2024-11-26 15:23:56.514249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:58.154 [2024-11-26 15:23:56.514264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:07:58.154 [2024-11-26 15:23:56.514389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.154 15:23:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.154 "name": "raid_bdev1", 00:07:58.154 "uuid": "bf5314f0-0fa6-4145-bb24-204f606d6c72", 00:07:58.154 "strip_size_kb": 0, 00:07:58.154 "state": "online", 00:07:58.154 "raid_level": "raid1", 00:07:58.154 "superblock": true, 00:07:58.154 "num_base_bdevs": 2, 00:07:58.154 "num_base_bdevs_discovered": 2, 00:07:58.154 "num_base_bdevs_operational": 2, 00:07:58.154 "base_bdevs_list": [ 00:07:58.154 { 00:07:58.154 "name": "BaseBdev1", 00:07:58.154 "uuid": "7e5d1c03-50f1-54b8-a71e-a37238e3786b", 00:07:58.154 "is_configured": true, 00:07:58.154 "data_offset": 2048, 00:07:58.154 "data_size": 63488 00:07:58.154 }, 00:07:58.154 { 00:07:58.154 "name": "BaseBdev2", 00:07:58.154 "uuid": "807e8457-3c03-517c-b2bc-2d4d5353b4bb", 00:07:58.154 "is_configured": true, 00:07:58.154 "data_offset": 2048, 00:07:58.154 "data_size": 63488 00:07:58.154 } 00:07:58.154 ] 00:07:58.154 }' 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.154 15:23:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.721 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:58.721 15:23:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:58.721 [2024-11-26 15:23:57.008341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:07:59.665 15:23:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.665 [2024-11-26 15:23:57.950328] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:59.665 [2024-11-26 15:23:57.950470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.665 [2024-11-26 15:23:57.950706] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000067d0 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.665 
15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.665 15:23:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.665 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.665 "name": "raid_bdev1", 00:07:59.665 "uuid": "bf5314f0-0fa6-4145-bb24-204f606d6c72", 00:07:59.665 "strip_size_kb": 0, 00:07:59.665 "state": "online", 00:07:59.665 "raid_level": "raid1", 00:07:59.665 "superblock": true, 00:07:59.665 "num_base_bdevs": 2, 00:07:59.665 "num_base_bdevs_discovered": 1, 00:07:59.665 "num_base_bdevs_operational": 1, 00:07:59.665 "base_bdevs_list": [ 00:07:59.665 { 00:07:59.665 "name": null, 00:07:59.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.665 "is_configured": false, 00:07:59.665 "data_offset": 0, 00:07:59.665 "data_size": 63488 00:07:59.665 }, 00:07:59.665 { 00:07:59.665 "name": "BaseBdev2", 00:07:59.665 "uuid": "807e8457-3c03-517c-b2bc-2d4d5353b4bb", 00:07:59.665 "is_configured": true, 00:07:59.665 "data_offset": 2048, 00:07:59.665 "data_size": 63488 00:07:59.665 } 00:07:59.665 ] 00:07:59.665 }' 00:07:59.665 15:23:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.665 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.242 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.242 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.242 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.242 [2024-11-26 15:23:58.416140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.242 [2024-11-26 15:23:58.416233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.242 [2024-11-26 15:23:58.418631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.242 [2024-11-26 15:23:58.418685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.242 [2024-11-26 15:23:58.418739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.242 [2024-11-26 15:23:58.418753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:00.242 { 00:08:00.242 "results": [ 00:08:00.242 { 00:08:00.242 "job": "raid_bdev1", 00:08:00.242 "core_mask": "0x1", 00:08:00.242 "workload": "randrw", 00:08:00.242 "percentage": 50, 00:08:00.242 "status": "finished", 00:08:00.243 "queue_depth": 1, 00:08:00.243 "io_size": 131072, 00:08:00.243 "runtime": 1.405927, 00:08:00.243 "iops": 24209.649576400483, 00:08:00.243 "mibps": 3026.2061970500604, 00:08:00.243 "io_failed": 0, 00:08:00.243 "io_timeout": 0, 00:08:00.243 "avg_latency_us": 38.85758326978531, 00:08:00.243 "min_latency_us": 21.53229321014556, 00:08:00.243 "max_latency_us": 1399.4874923733985 00:08:00.243 } 00:08:00.243 ], 00:08:00.243 "core_count": 1 00:08:00.243 } 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76511 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76511 ']' 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76511 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76511 00:08:00.243 killing process with pid 76511 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76511' 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76511 00:08:00.243 [2024-11-26 15:23:58.464152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76511 00:08:00.243 [2024-11-26 15:23:58.479381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TScnynIFsG 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:00.243 ************************************ 00:08:00.243 END TEST raid_write_error_test 00:08:00.243 ************************************ 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:00.243 00:08:00.243 real 0m3.185s 00:08:00.243 user 0m4.049s 00:08:00.243 sys 0m0.505s 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.243 15:23:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.503 15:23:58 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:00.503 15:23:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:00.503 15:23:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:00.503 15:23:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.503 15:23:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.503 15:23:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.503 ************************************ 00:08:00.503 START TEST raid_state_function_test 00:08:00.503 ************************************ 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:00.503 15:23:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:00.503 Process raid pid: 76638 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76638 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76638' 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76638 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 76638 ']' 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.503 15:23:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.503 [2024-11-26 15:23:58.858572] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:08:00.503 [2024-11-26 15:23:58.858796] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.762 [2024-11-26 15:23:58.993873] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.762 [2024-11-26 15:23:59.033046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.762 [2024-11-26 15:23:59.058317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.762 [2024-11-26 15:23:59.100966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.762 [2024-11-26 15:23:59.101003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.329 [2024-11-26 15:23:59.671433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.329 [2024-11-26 15:23:59.671490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.329 [2024-11-26 15:23:59.671503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.329 [2024-11-26 15:23:59.671510] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.329 [2024-11-26 15:23:59.671521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:01.329 [2024-11-26 15:23:59.671529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.329 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.330 15:23:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.330 "name": "Existed_Raid", 00:08:01.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.330 "strip_size_kb": 64, 00:08:01.330 "state": "configuring", 00:08:01.330 "raid_level": "raid0", 00:08:01.330 "superblock": false, 00:08:01.330 "num_base_bdevs": 3, 00:08:01.330 "num_base_bdevs_discovered": 0, 00:08:01.330 "num_base_bdevs_operational": 3, 00:08:01.330 "base_bdevs_list": [ 00:08:01.330 { 00:08:01.330 "name": "BaseBdev1", 00:08:01.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.330 "is_configured": false, 00:08:01.330 "data_offset": 0, 00:08:01.330 "data_size": 0 00:08:01.330 }, 00:08:01.330 { 00:08:01.330 "name": "BaseBdev2", 00:08:01.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.330 "is_configured": false, 00:08:01.330 "data_offset": 0, 00:08:01.330 "data_size": 0 00:08:01.330 }, 00:08:01.330 { 00:08:01.330 "name": "BaseBdev3", 00:08:01.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.330 "is_configured": false, 00:08:01.330 "data_offset": 0, 00:08:01.330 "data_size": 0 00:08:01.330 } 00:08:01.330 ] 00:08:01.330 }' 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.330 15:23:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.898 [2024-11-26 15:24:00.139463] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.898 [2024-11-26 15:24:00.139544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.898 [2024-11-26 15:24:00.147491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.898 [2024-11-26 15:24:00.147582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.898 [2024-11-26 15:24:00.147611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.898 [2024-11-26 15:24:00.147631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.898 [2024-11-26 15:24:00.147650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:01.898 [2024-11-26 15:24:00.147667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.898 
[2024-11-26 15:24:00.164320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.898 BaseBdev1 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.898 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.898 [ 00:08:01.898 { 00:08:01.898 "name": "BaseBdev1", 00:08:01.898 "aliases": [ 00:08:01.898 "c63df6e8-f6a5-4ab8-a378-f5f98b648109" 00:08:01.898 ], 00:08:01.898 "product_name": "Malloc disk", 00:08:01.899 "block_size": 512, 00:08:01.899 "num_blocks": 65536, 00:08:01.899 "uuid": 
"c63df6e8-f6a5-4ab8-a378-f5f98b648109", 00:08:01.899 "assigned_rate_limits": { 00:08:01.899 "rw_ios_per_sec": 0, 00:08:01.899 "rw_mbytes_per_sec": 0, 00:08:01.899 "r_mbytes_per_sec": 0, 00:08:01.899 "w_mbytes_per_sec": 0 00:08:01.899 }, 00:08:01.899 "claimed": true, 00:08:01.899 "claim_type": "exclusive_write", 00:08:01.899 "zoned": false, 00:08:01.899 "supported_io_types": { 00:08:01.899 "read": true, 00:08:01.899 "write": true, 00:08:01.899 "unmap": true, 00:08:01.899 "flush": true, 00:08:01.899 "reset": true, 00:08:01.899 "nvme_admin": false, 00:08:01.899 "nvme_io": false, 00:08:01.899 "nvme_io_md": false, 00:08:01.899 "write_zeroes": true, 00:08:01.899 "zcopy": true, 00:08:01.899 "get_zone_info": false, 00:08:01.899 "zone_management": false, 00:08:01.899 "zone_append": false, 00:08:01.899 "compare": false, 00:08:01.899 "compare_and_write": false, 00:08:01.899 "abort": true, 00:08:01.899 "seek_hole": false, 00:08:01.899 "seek_data": false, 00:08:01.899 "copy": true, 00:08:01.899 "nvme_iov_md": false 00:08:01.899 }, 00:08:01.899 "memory_domains": [ 00:08:01.899 { 00:08:01.899 "dma_device_id": "system", 00:08:01.899 "dma_device_type": 1 00:08:01.899 }, 00:08:01.899 { 00:08:01.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.899 "dma_device_type": 2 00:08:01.899 } 00:08:01.899 ], 00:08:01.899 "driver_specific": {} 00:08:01.899 } 00:08:01.899 ] 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.899 15:24:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.899 "name": "Existed_Raid", 00:08:01.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.899 "strip_size_kb": 64, 00:08:01.899 "state": "configuring", 00:08:01.899 "raid_level": "raid0", 00:08:01.899 "superblock": false, 00:08:01.899 "num_base_bdevs": 3, 00:08:01.899 "num_base_bdevs_discovered": 1, 00:08:01.899 "num_base_bdevs_operational": 3, 00:08:01.899 "base_bdevs_list": [ 00:08:01.899 { 00:08:01.899 "name": "BaseBdev1", 00:08:01.899 "uuid": "c63df6e8-f6a5-4ab8-a378-f5f98b648109", 00:08:01.899 "is_configured": true, 00:08:01.899 "data_offset": 0, 
00:08:01.899 "data_size": 65536 00:08:01.899 }, 00:08:01.899 { 00:08:01.899 "name": "BaseBdev2", 00:08:01.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.899 "is_configured": false, 00:08:01.899 "data_offset": 0, 00:08:01.899 "data_size": 0 00:08:01.899 }, 00:08:01.899 { 00:08:01.899 "name": "BaseBdev3", 00:08:01.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.899 "is_configured": false, 00:08:01.899 "data_offset": 0, 00:08:01.899 "data_size": 0 00:08:01.899 } 00:08:01.899 ] 00:08:01.899 }' 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.899 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.469 [2024-11-26 15:24:00.640465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.469 [2024-11-26 15:24:00.640518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.469 [2024-11-26 15:24:00.652516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.469 [2024-11-26 
15:24:00.654303] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.469 [2024-11-26 15:24:00.654341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.469 [2024-11-26 15:24:00.654354] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:02.469 [2024-11-26 15:24:00.654361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.469 15:24:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.469 "name": "Existed_Raid", 00:08:02.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.469 "strip_size_kb": 64, 00:08:02.469 "state": "configuring", 00:08:02.469 "raid_level": "raid0", 00:08:02.469 "superblock": false, 00:08:02.469 "num_base_bdevs": 3, 00:08:02.469 "num_base_bdevs_discovered": 1, 00:08:02.469 "num_base_bdevs_operational": 3, 00:08:02.469 "base_bdevs_list": [ 00:08:02.469 { 00:08:02.469 "name": "BaseBdev1", 00:08:02.469 "uuid": "c63df6e8-f6a5-4ab8-a378-f5f98b648109", 00:08:02.469 "is_configured": true, 00:08:02.469 "data_offset": 0, 00:08:02.469 "data_size": 65536 00:08:02.469 }, 00:08:02.469 { 00:08:02.469 "name": "BaseBdev2", 00:08:02.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.469 "is_configured": false, 00:08:02.469 "data_offset": 0, 00:08:02.469 "data_size": 0 00:08:02.469 }, 00:08:02.469 { 00:08:02.469 "name": "BaseBdev3", 00:08:02.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.469 "is_configured": false, 00:08:02.469 "data_offset": 0, 00:08:02.469 "data_size": 0 00:08:02.469 } 00:08:02.469 ] 00:08:02.469 }' 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.469 15:24:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.729 15:24:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.729 [2024-11-26 15:24:01.055598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.729 BaseBdev2 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.729 [ 00:08:02.729 { 00:08:02.729 "name": "BaseBdev2", 00:08:02.729 "aliases": [ 00:08:02.729 "627783d7-8499-4b62-a73e-3495aea37ab9" 00:08:02.729 ], 00:08:02.729 "product_name": "Malloc disk", 00:08:02.729 "block_size": 512, 00:08:02.729 "num_blocks": 65536, 00:08:02.729 "uuid": "627783d7-8499-4b62-a73e-3495aea37ab9", 00:08:02.729 "assigned_rate_limits": { 00:08:02.729 "rw_ios_per_sec": 0, 00:08:02.729 "rw_mbytes_per_sec": 0, 00:08:02.729 "r_mbytes_per_sec": 0, 00:08:02.729 "w_mbytes_per_sec": 0 00:08:02.729 }, 00:08:02.729 "claimed": true, 00:08:02.729 "claim_type": "exclusive_write", 00:08:02.729 "zoned": false, 00:08:02.729 "supported_io_types": { 00:08:02.729 "read": true, 00:08:02.729 "write": true, 00:08:02.729 "unmap": true, 00:08:02.729 "flush": true, 00:08:02.729 "reset": true, 00:08:02.729 "nvme_admin": false, 00:08:02.729 "nvme_io": false, 00:08:02.729 "nvme_io_md": false, 00:08:02.729 "write_zeroes": true, 00:08:02.729 "zcopy": true, 00:08:02.729 "get_zone_info": false, 00:08:02.729 "zone_management": false, 00:08:02.729 "zone_append": false, 00:08:02.729 "compare": false, 00:08:02.729 "compare_and_write": false, 00:08:02.729 "abort": true, 00:08:02.729 "seek_hole": false, 00:08:02.729 "seek_data": false, 00:08:02.729 "copy": true, 00:08:02.729 "nvme_iov_md": false 00:08:02.729 }, 00:08:02.729 "memory_domains": [ 00:08:02.729 { 00:08:02.729 "dma_device_id": "system", 00:08:02.729 "dma_device_type": 1 00:08:02.729 }, 00:08:02.729 { 00:08:02.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.729 "dma_device_type": 2 00:08:02.729 } 00:08:02.729 ], 00:08:02.729 "driver_specific": {} 00:08:02.729 } 00:08:02.729 ] 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.729 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.730 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.730 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.730 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.730 "name": "Existed_Raid", 
00:08:02.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.730 "strip_size_kb": 64, 00:08:02.730 "state": "configuring", 00:08:02.730 "raid_level": "raid0", 00:08:02.730 "superblock": false, 00:08:02.730 "num_base_bdevs": 3, 00:08:02.730 "num_base_bdevs_discovered": 2, 00:08:02.730 "num_base_bdevs_operational": 3, 00:08:02.730 "base_bdevs_list": [ 00:08:02.730 { 00:08:02.730 "name": "BaseBdev1", 00:08:02.730 "uuid": "c63df6e8-f6a5-4ab8-a378-f5f98b648109", 00:08:02.730 "is_configured": true, 00:08:02.730 "data_offset": 0, 00:08:02.730 "data_size": 65536 00:08:02.730 }, 00:08:02.730 { 00:08:02.730 "name": "BaseBdev2", 00:08:02.730 "uuid": "627783d7-8499-4b62-a73e-3495aea37ab9", 00:08:02.730 "is_configured": true, 00:08:02.730 "data_offset": 0, 00:08:02.730 "data_size": 65536 00:08:02.730 }, 00:08:02.730 { 00:08:02.730 "name": "BaseBdev3", 00:08:02.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.730 "is_configured": false, 00:08:02.730 "data_offset": 0, 00:08:02.730 "data_size": 0 00:08:02.730 } 00:08:02.730 ] 00:08:02.730 }' 00:08:02.730 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.730 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.300 [2024-11-26 15:24:01.546370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:03.300 [2024-11-26 15:24:01.546641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:03.300 [2024-11-26 15:24:01.546745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 
00:08:03.300 [2024-11-26 15:24:01.547937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:03.300 [2024-11-26 15:24:01.548544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:03.300 [2024-11-26 15:24:01.548711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:03.300 BaseBdev3 00:08:03.300 [2024-11-26 15:24:01.549636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.300 [ 00:08:03.300 { 00:08:03.300 "name": "BaseBdev3", 00:08:03.300 "aliases": [ 00:08:03.300 "972e8977-be1d-4123-9902-9eb00a1869df" 00:08:03.300 ], 00:08:03.300 "product_name": "Malloc disk", 00:08:03.300 "block_size": 512, 00:08:03.300 "num_blocks": 65536, 00:08:03.300 "uuid": "972e8977-be1d-4123-9902-9eb00a1869df", 00:08:03.300 "assigned_rate_limits": { 00:08:03.300 "rw_ios_per_sec": 0, 00:08:03.300 "rw_mbytes_per_sec": 0, 00:08:03.300 "r_mbytes_per_sec": 0, 00:08:03.300 "w_mbytes_per_sec": 0 00:08:03.300 }, 00:08:03.300 "claimed": true, 00:08:03.300 "claim_type": "exclusive_write", 00:08:03.300 "zoned": false, 00:08:03.300 "supported_io_types": { 00:08:03.300 "read": true, 00:08:03.300 "write": true, 00:08:03.300 "unmap": true, 00:08:03.300 "flush": true, 00:08:03.300 "reset": true, 00:08:03.300 "nvme_admin": false, 00:08:03.300 "nvme_io": false, 00:08:03.300 "nvme_io_md": false, 00:08:03.300 "write_zeroes": true, 00:08:03.300 "zcopy": true, 00:08:03.300 "get_zone_info": false, 00:08:03.300 "zone_management": false, 00:08:03.300 "zone_append": false, 00:08:03.300 "compare": false, 00:08:03.300 "compare_and_write": false, 00:08:03.300 "abort": true, 00:08:03.300 "seek_hole": false, 00:08:03.300 "seek_data": false, 00:08:03.300 "copy": true, 00:08:03.300 "nvme_iov_md": false 00:08:03.300 }, 00:08:03.300 "memory_domains": [ 00:08:03.300 { 00:08:03.300 "dma_device_id": "system", 00:08:03.300 "dma_device_type": 1 00:08:03.300 }, 00:08:03.300 { 00:08:03.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.300 "dma_device_type": 2 00:08:03.300 } 00:08:03.300 ], 00:08:03.300 "driver_specific": {} 00:08:03.300 } 00:08:03.300 ] 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.300 "name": "Existed_Raid", 00:08:03.300 "uuid": "deedbbf4-7d69-4f22-b421-d95d2218947a", 00:08:03.300 "strip_size_kb": 64, 00:08:03.300 "state": "online", 00:08:03.300 "raid_level": "raid0", 00:08:03.300 "superblock": false, 00:08:03.300 "num_base_bdevs": 3, 00:08:03.300 "num_base_bdevs_discovered": 3, 00:08:03.300 "num_base_bdevs_operational": 3, 00:08:03.300 "base_bdevs_list": [ 00:08:03.300 { 00:08:03.300 "name": "BaseBdev1", 00:08:03.300 "uuid": "c63df6e8-f6a5-4ab8-a378-f5f98b648109", 00:08:03.300 "is_configured": true, 00:08:03.300 "data_offset": 0, 00:08:03.300 "data_size": 65536 00:08:03.300 }, 00:08:03.300 { 00:08:03.300 "name": "BaseBdev2", 00:08:03.300 "uuid": "627783d7-8499-4b62-a73e-3495aea37ab9", 00:08:03.300 "is_configured": true, 00:08:03.300 "data_offset": 0, 00:08:03.300 "data_size": 65536 00:08:03.300 }, 00:08:03.300 { 00:08:03.300 "name": "BaseBdev3", 00:08:03.300 "uuid": "972e8977-be1d-4123-9902-9eb00a1869df", 00:08:03.300 "is_configured": true, 00:08:03.300 "data_offset": 0, 00:08:03.300 "data_size": 65536 00:08:03.300 } 00:08:03.300 ] 00:08:03.300 }' 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.300 15:24:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.560 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.560 [2024-11-26 15:24:02.018724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.820 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.820 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.820 "name": "Existed_Raid", 00:08:03.820 "aliases": [ 00:08:03.820 "deedbbf4-7d69-4f22-b421-d95d2218947a" 00:08:03.820 ], 00:08:03.820 "product_name": "Raid Volume", 00:08:03.820 "block_size": 512, 00:08:03.820 "num_blocks": 196608, 00:08:03.820 "uuid": "deedbbf4-7d69-4f22-b421-d95d2218947a", 00:08:03.820 "assigned_rate_limits": { 00:08:03.820 "rw_ios_per_sec": 0, 00:08:03.820 "rw_mbytes_per_sec": 0, 00:08:03.820 "r_mbytes_per_sec": 0, 00:08:03.820 "w_mbytes_per_sec": 0 00:08:03.820 }, 00:08:03.820 "claimed": false, 00:08:03.820 "zoned": false, 00:08:03.820 "supported_io_types": { 00:08:03.820 "read": true, 00:08:03.820 "write": true, 00:08:03.820 "unmap": true, 00:08:03.820 "flush": true, 00:08:03.820 "reset": true, 00:08:03.821 "nvme_admin": false, 00:08:03.821 "nvme_io": false, 00:08:03.821 "nvme_io_md": false, 00:08:03.821 "write_zeroes": true, 00:08:03.821 "zcopy": false, 00:08:03.821 "get_zone_info": false, 00:08:03.821 "zone_management": false, 00:08:03.821 "zone_append": false, 00:08:03.821 "compare": false, 00:08:03.821 "compare_and_write": false, 00:08:03.821 "abort": false, 00:08:03.821 "seek_hole": false, 00:08:03.821 "seek_data": false, 00:08:03.821 "copy": 
false, 00:08:03.821 "nvme_iov_md": false 00:08:03.821 }, 00:08:03.821 "memory_domains": [ 00:08:03.821 { 00:08:03.821 "dma_device_id": "system", 00:08:03.821 "dma_device_type": 1 00:08:03.821 }, 00:08:03.821 { 00:08:03.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.821 "dma_device_type": 2 00:08:03.821 }, 00:08:03.821 { 00:08:03.821 "dma_device_id": "system", 00:08:03.821 "dma_device_type": 1 00:08:03.821 }, 00:08:03.821 { 00:08:03.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.821 "dma_device_type": 2 00:08:03.821 }, 00:08:03.821 { 00:08:03.821 "dma_device_id": "system", 00:08:03.821 "dma_device_type": 1 00:08:03.821 }, 00:08:03.821 { 00:08:03.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.821 "dma_device_type": 2 00:08:03.821 } 00:08:03.821 ], 00:08:03.821 "driver_specific": { 00:08:03.821 "raid": { 00:08:03.821 "uuid": "deedbbf4-7d69-4f22-b421-d95d2218947a", 00:08:03.821 "strip_size_kb": 64, 00:08:03.821 "state": "online", 00:08:03.821 "raid_level": "raid0", 00:08:03.821 "superblock": false, 00:08:03.821 "num_base_bdevs": 3, 00:08:03.821 "num_base_bdevs_discovered": 3, 00:08:03.821 "num_base_bdevs_operational": 3, 00:08:03.821 "base_bdevs_list": [ 00:08:03.821 { 00:08:03.821 "name": "BaseBdev1", 00:08:03.821 "uuid": "c63df6e8-f6a5-4ab8-a378-f5f98b648109", 00:08:03.821 "is_configured": true, 00:08:03.821 "data_offset": 0, 00:08:03.821 "data_size": 65536 00:08:03.821 }, 00:08:03.821 { 00:08:03.821 "name": "BaseBdev2", 00:08:03.821 "uuid": "627783d7-8499-4b62-a73e-3495aea37ab9", 00:08:03.821 "is_configured": true, 00:08:03.821 "data_offset": 0, 00:08:03.821 "data_size": 65536 00:08:03.821 }, 00:08:03.821 { 00:08:03.821 "name": "BaseBdev3", 00:08:03.821 "uuid": "972e8977-be1d-4123-9902-9eb00a1869df", 00:08:03.821 "is_configured": true, 00:08:03.821 "data_offset": 0, 00:08:03.821 "data_size": 65536 00:08:03.821 } 00:08:03.821 ] 00:08:03.821 } 00:08:03.821 } 00:08:03.821 }' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:03.821 BaseBdev2 00:08:03.821 BaseBdev3' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.821 [2024-11-26 15:24:02.262583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.821 [2024-11-26 15:24:02.262655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.821 [2024-11-26 15:24:02.262723] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.821 15:24:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.821 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.081 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.081 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.081 "name": "Existed_Raid", 00:08:04.081 "uuid": "deedbbf4-7d69-4f22-b421-d95d2218947a", 00:08:04.081 "strip_size_kb": 64, 00:08:04.081 "state": "offline", 00:08:04.081 "raid_level": "raid0", 00:08:04.081 "superblock": false, 00:08:04.081 "num_base_bdevs": 3, 00:08:04.081 "num_base_bdevs_discovered": 2, 00:08:04.081 "num_base_bdevs_operational": 2, 00:08:04.081 "base_bdevs_list": [ 00:08:04.081 { 00:08:04.081 "name": null, 00:08:04.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.081 "is_configured": false, 00:08:04.081 "data_offset": 0, 00:08:04.081 "data_size": 65536 00:08:04.081 }, 00:08:04.081 { 00:08:04.081 "name": "BaseBdev2", 00:08:04.081 "uuid": "627783d7-8499-4b62-a73e-3495aea37ab9", 00:08:04.081 "is_configured": true, 00:08:04.081 "data_offset": 0, 00:08:04.081 "data_size": 65536 00:08:04.081 }, 00:08:04.081 { 00:08:04.081 "name": "BaseBdev3", 00:08:04.081 "uuid": "972e8977-be1d-4123-9902-9eb00a1869df", 00:08:04.081 "is_configured": true, 00:08:04.081 "data_offset": 0, 00:08:04.081 "data_size": 65536 00:08:04.081 } 00:08:04.081 ] 00:08:04.081 }' 00:08:04.081 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.081 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.341 [2024-11-26 15:24:02.758022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.341 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.341 [2024-11-26 15:24:02.813263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:04.341 [2024-11-26 15:24:02.813313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.601 BaseBdev2 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.601 [ 00:08:04.601 { 00:08:04.601 "name": "BaseBdev2", 00:08:04.601 "aliases": [ 00:08:04.601 "eea4cc54-3973-40ba-9627-2cf0f1f54331" 00:08:04.601 ], 00:08:04.601 "product_name": "Malloc disk", 00:08:04.601 "block_size": 512, 00:08:04.601 "num_blocks": 65536, 00:08:04.601 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:04.601 "assigned_rate_limits": { 00:08:04.601 "rw_ios_per_sec": 0, 00:08:04.601 "rw_mbytes_per_sec": 0, 00:08:04.601 "r_mbytes_per_sec": 0, 00:08:04.601 "w_mbytes_per_sec": 0 00:08:04.601 }, 00:08:04.601 "claimed": false, 00:08:04.601 "zoned": false, 00:08:04.601 "supported_io_types": { 00:08:04.601 "read": true, 00:08:04.601 "write": true, 00:08:04.601 "unmap": true, 00:08:04.601 "flush": true, 00:08:04.601 "reset": true, 00:08:04.601 "nvme_admin": false, 00:08:04.601 "nvme_io": false, 00:08:04.601 "nvme_io_md": false, 00:08:04.601 "write_zeroes": true, 00:08:04.601 "zcopy": true, 00:08:04.601 "get_zone_info": false, 00:08:04.601 "zone_management": false, 00:08:04.601 "zone_append": false, 00:08:04.601 "compare": false, 00:08:04.601 "compare_and_write": false, 00:08:04.601 "abort": true, 00:08:04.601 "seek_hole": false, 00:08:04.601 "seek_data": false, 00:08:04.601 "copy": true, 00:08:04.601 "nvme_iov_md": false 00:08:04.601 }, 00:08:04.601 "memory_domains": [ 00:08:04.601 { 00:08:04.601 "dma_device_id": "system", 00:08:04.601 "dma_device_type": 1 00:08:04.601 }, 00:08:04.601 { 00:08:04.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.601 "dma_device_type": 2 00:08:04.601 } 00:08:04.601 ], 00:08:04.601 "driver_specific": {} 00:08:04.601 } 00:08:04.601 ] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.601 BaseBdev3 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.601 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.601 [ 00:08:04.601 { 00:08:04.601 "name": "BaseBdev3", 00:08:04.601 "aliases": [ 00:08:04.601 "fd88e818-e80e-4fe5-a601-bb61a047ae5d" 00:08:04.601 ], 00:08:04.601 "product_name": "Malloc disk", 00:08:04.601 "block_size": 512, 00:08:04.601 "num_blocks": 65536, 00:08:04.601 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:04.601 "assigned_rate_limits": { 00:08:04.601 "rw_ios_per_sec": 0, 00:08:04.601 "rw_mbytes_per_sec": 0, 00:08:04.601 "r_mbytes_per_sec": 0, 00:08:04.601 "w_mbytes_per_sec": 0 00:08:04.601 }, 00:08:04.601 "claimed": false, 00:08:04.601 "zoned": false, 00:08:04.601 "supported_io_types": { 00:08:04.601 "read": true, 00:08:04.601 "write": true, 00:08:04.601 "unmap": true, 00:08:04.601 "flush": true, 00:08:04.601 "reset": true, 00:08:04.601 "nvme_admin": false, 00:08:04.601 "nvme_io": false, 00:08:04.601 "nvme_io_md": false, 00:08:04.601 "write_zeroes": true, 00:08:04.601 "zcopy": true, 00:08:04.601 "get_zone_info": false, 00:08:04.601 "zone_management": false, 00:08:04.601 "zone_append": false, 00:08:04.601 "compare": false, 00:08:04.601 "compare_and_write": false, 00:08:04.601 "abort": true, 00:08:04.601 "seek_hole": false, 00:08:04.601 "seek_data": false, 00:08:04.601 "copy": true, 00:08:04.601 "nvme_iov_md": false 00:08:04.601 }, 00:08:04.601 "memory_domains": [ 00:08:04.601 { 00:08:04.601 "dma_device_id": "system", 00:08:04.601 "dma_device_type": 1 00:08:04.601 }, 00:08:04.601 { 00:08:04.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.601 "dma_device_type": 2 00:08:04.601 } 00:08:04.601 ], 00:08:04.601 "driver_specific": {} 00:08:04.601 } 00:08:04.601 ] 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.602 [2024-11-26 15:24:02.985143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.602 [2024-11-26 15:24:02.985201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.602 [2024-11-26 15:24:02.985238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.602 [2024-11-26 15:24:02.987056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.602 15:24:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.602 15:24:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.602 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.602 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.602 "name": "Existed_Raid", 00:08:04.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.602 "strip_size_kb": 64, 00:08:04.602 "state": "configuring", 00:08:04.602 "raid_level": "raid0", 00:08:04.602 "superblock": false, 00:08:04.602 "num_base_bdevs": 3, 00:08:04.602 "num_base_bdevs_discovered": 2, 00:08:04.602 "num_base_bdevs_operational": 3, 00:08:04.602 "base_bdevs_list": [ 00:08:04.602 { 00:08:04.602 "name": "BaseBdev1", 00:08:04.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.602 "is_configured": false, 00:08:04.602 "data_offset": 0, 00:08:04.602 "data_size": 0 00:08:04.602 }, 00:08:04.602 { 00:08:04.602 "name": "BaseBdev2", 00:08:04.602 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:04.602 "is_configured": true, 00:08:04.602 
"data_offset": 0, 00:08:04.602 "data_size": 65536 00:08:04.602 }, 00:08:04.602 { 00:08:04.602 "name": "BaseBdev3", 00:08:04.602 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:04.602 "is_configured": true, 00:08:04.602 "data_offset": 0, 00:08:04.602 "data_size": 65536 00:08:04.602 } 00:08:04.602 ] 00:08:04.602 }' 00:08:04.602 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.602 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.172 [2024-11-26 15:24:03.385226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.172 "name": "Existed_Raid", 00:08:05.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.172 "strip_size_kb": 64, 00:08:05.172 "state": "configuring", 00:08:05.172 "raid_level": "raid0", 00:08:05.172 "superblock": false, 00:08:05.172 "num_base_bdevs": 3, 00:08:05.172 "num_base_bdevs_discovered": 1, 00:08:05.172 "num_base_bdevs_operational": 3, 00:08:05.172 "base_bdevs_list": [ 00:08:05.172 { 00:08:05.172 "name": "BaseBdev1", 00:08:05.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.172 "is_configured": false, 00:08:05.172 "data_offset": 0, 00:08:05.172 "data_size": 0 00:08:05.172 }, 00:08:05.172 { 00:08:05.172 "name": null, 00:08:05.172 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:05.172 "is_configured": false, 00:08:05.172 "data_offset": 0, 00:08:05.172 "data_size": 65536 00:08:05.172 }, 00:08:05.172 { 00:08:05.172 "name": "BaseBdev3", 00:08:05.172 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:05.172 "is_configured": true, 00:08:05.172 "data_offset": 0, 00:08:05.172 "data_size": 65536 00:08:05.172 } 00:08:05.172 ] 
00:08:05.172 }' 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.172 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.432 BaseBdev1 00:08:05.432 [2024-11-26 15:24:03.872275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:05.432 15:24:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.432 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.432 [ 00:08:05.432 { 00:08:05.432 "name": "BaseBdev1", 00:08:05.432 "aliases": [ 00:08:05.432 "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e" 00:08:05.432 ], 00:08:05.432 "product_name": "Malloc disk", 00:08:05.432 "block_size": 512, 00:08:05.432 "num_blocks": 65536, 00:08:05.432 "uuid": "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e", 00:08:05.432 "assigned_rate_limits": { 00:08:05.432 "rw_ios_per_sec": 0, 00:08:05.432 "rw_mbytes_per_sec": 0, 00:08:05.432 "r_mbytes_per_sec": 0, 00:08:05.432 "w_mbytes_per_sec": 0 00:08:05.432 }, 00:08:05.432 "claimed": true, 00:08:05.432 "claim_type": "exclusive_write", 00:08:05.432 "zoned": false, 00:08:05.432 "supported_io_types": { 00:08:05.432 "read": true, 00:08:05.432 "write": true, 00:08:05.432 "unmap": true, 00:08:05.432 "flush": true, 00:08:05.432 "reset": true, 00:08:05.432 "nvme_admin": false, 00:08:05.432 "nvme_io": false, 00:08:05.432 "nvme_io_md": false, 00:08:05.432 "write_zeroes": true, 00:08:05.432 "zcopy": true, 00:08:05.432 "get_zone_info": false, 
00:08:05.432 "zone_management": false, 00:08:05.432 "zone_append": false, 00:08:05.432 "compare": false, 00:08:05.691 "compare_and_write": false, 00:08:05.691 "abort": true, 00:08:05.691 "seek_hole": false, 00:08:05.691 "seek_data": false, 00:08:05.691 "copy": true, 00:08:05.691 "nvme_iov_md": false 00:08:05.691 }, 00:08:05.691 "memory_domains": [ 00:08:05.691 { 00:08:05.691 "dma_device_id": "system", 00:08:05.691 "dma_device_type": 1 00:08:05.691 }, 00:08:05.691 { 00:08:05.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.691 "dma_device_type": 2 00:08:05.691 } 00:08:05.691 ], 00:08:05.692 "driver_specific": {} 00:08:05.692 } 00:08:05.692 ] 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.692 "name": "Existed_Raid", 00:08:05.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.692 "strip_size_kb": 64, 00:08:05.692 "state": "configuring", 00:08:05.692 "raid_level": "raid0", 00:08:05.692 "superblock": false, 00:08:05.692 "num_base_bdevs": 3, 00:08:05.692 "num_base_bdevs_discovered": 2, 00:08:05.692 "num_base_bdevs_operational": 3, 00:08:05.692 "base_bdevs_list": [ 00:08:05.692 { 00:08:05.692 "name": "BaseBdev1", 00:08:05.692 "uuid": "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e", 00:08:05.692 "is_configured": true, 00:08:05.692 "data_offset": 0, 00:08:05.692 "data_size": 65536 00:08:05.692 }, 00:08:05.692 { 00:08:05.692 "name": null, 00:08:05.692 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:05.692 "is_configured": false, 00:08:05.692 "data_offset": 0, 00:08:05.692 "data_size": 65536 00:08:05.692 }, 00:08:05.692 { 00:08:05.692 "name": "BaseBdev3", 00:08:05.692 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:05.692 "is_configured": true, 00:08:05.692 "data_offset": 0, 00:08:05.692 "data_size": 65536 00:08:05.692 } 00:08:05.692 ] 00:08:05.692 }' 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.692 15:24:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.952 [2024-11-26 15:24:04.388460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.952 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.212 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.212 "name": "Existed_Raid", 00:08:06.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.212 "strip_size_kb": 64, 00:08:06.212 "state": "configuring", 00:08:06.212 "raid_level": "raid0", 00:08:06.212 "superblock": false, 00:08:06.212 "num_base_bdevs": 3, 00:08:06.212 "num_base_bdevs_discovered": 1, 00:08:06.212 "num_base_bdevs_operational": 3, 00:08:06.212 "base_bdevs_list": [ 00:08:06.212 { 00:08:06.212 "name": "BaseBdev1", 00:08:06.212 "uuid": "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e", 00:08:06.212 "is_configured": true, 00:08:06.212 "data_offset": 0, 00:08:06.212 "data_size": 65536 00:08:06.212 }, 00:08:06.212 { 00:08:06.212 "name": null, 00:08:06.212 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:06.212 "is_configured": false, 00:08:06.212 "data_offset": 0, 00:08:06.213 "data_size": 65536 00:08:06.213 }, 00:08:06.213 { 
00:08:06.213 "name": null, 00:08:06.213 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:06.213 "is_configured": false, 00:08:06.213 "data_offset": 0, 00:08:06.213 "data_size": 65536 00:08:06.213 } 00:08:06.213 ] 00:08:06.213 }' 00:08:06.213 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.213 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 [2024-11-26 15:24:04.892640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.472 15:24:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.472 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.473 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.473 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.473 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.473 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.473 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.473 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.473 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.473 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.732 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.732 "name": "Existed_Raid", 00:08:06.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.732 "strip_size_kb": 64, 00:08:06.732 "state": "configuring", 00:08:06.732 "raid_level": "raid0", 00:08:06.732 "superblock": false, 00:08:06.732 "num_base_bdevs": 3, 00:08:06.732 "num_base_bdevs_discovered": 2, 00:08:06.732 "num_base_bdevs_operational": 3, 00:08:06.732 "base_bdevs_list": [ 00:08:06.732 { 00:08:06.732 "name": "BaseBdev1", 
00:08:06.732 "uuid": "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e", 00:08:06.732 "is_configured": true, 00:08:06.732 "data_offset": 0, 00:08:06.732 "data_size": 65536 00:08:06.732 }, 00:08:06.732 { 00:08:06.732 "name": null, 00:08:06.732 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:06.732 "is_configured": false, 00:08:06.732 "data_offset": 0, 00:08:06.732 "data_size": 65536 00:08:06.732 }, 00:08:06.732 { 00:08:06.732 "name": "BaseBdev3", 00:08:06.732 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:06.732 "is_configured": true, 00:08:06.732 "data_offset": 0, 00:08:06.732 "data_size": 65536 00:08:06.732 } 00:08:06.732 ] 00:08:06.732 }' 00:08:06.732 15:24:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.732 15:24:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 [2024-11-26 15:24:05.364780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.992 15:24:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.992 "name": "Existed_Raid", 00:08:06.992 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:06.992 "strip_size_kb": 64, 00:08:06.992 "state": "configuring", 00:08:06.992 "raid_level": "raid0", 00:08:06.992 "superblock": false, 00:08:06.992 "num_base_bdevs": 3, 00:08:06.992 "num_base_bdevs_discovered": 1, 00:08:06.992 "num_base_bdevs_operational": 3, 00:08:06.992 "base_bdevs_list": [ 00:08:06.992 { 00:08:06.992 "name": null, 00:08:06.992 "uuid": "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e", 00:08:06.992 "is_configured": false, 00:08:06.992 "data_offset": 0, 00:08:06.992 "data_size": 65536 00:08:06.992 }, 00:08:06.992 { 00:08:06.992 "name": null, 00:08:06.992 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:06.992 "is_configured": false, 00:08:06.992 "data_offset": 0, 00:08:06.992 "data_size": 65536 00:08:06.992 }, 00:08:06.992 { 00:08:06.992 "name": "BaseBdev3", 00:08:06.992 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:06.992 "is_configured": true, 00:08:06.992 "data_offset": 0, 00:08:06.992 "data_size": 65536 00:08:06.992 } 00:08:06.992 ] 00:08:06.992 }' 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.992 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.561 [2024-11-26 15:24:05.819324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.561 "name": "Existed_Raid", 00:08:07.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.561 "strip_size_kb": 64, 00:08:07.561 "state": "configuring", 00:08:07.561 "raid_level": "raid0", 00:08:07.561 "superblock": false, 00:08:07.561 "num_base_bdevs": 3, 00:08:07.561 "num_base_bdevs_discovered": 2, 00:08:07.561 "num_base_bdevs_operational": 3, 00:08:07.561 "base_bdevs_list": [ 00:08:07.561 { 00:08:07.561 "name": null, 00:08:07.561 "uuid": "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e", 00:08:07.561 "is_configured": false, 00:08:07.561 "data_offset": 0, 00:08:07.561 "data_size": 65536 00:08:07.561 }, 00:08:07.561 { 00:08:07.561 "name": "BaseBdev2", 00:08:07.561 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:07.561 "is_configured": true, 00:08:07.561 "data_offset": 0, 00:08:07.561 "data_size": 65536 00:08:07.561 }, 00:08:07.561 { 00:08:07.561 "name": "BaseBdev3", 00:08:07.561 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:07.561 "is_configured": true, 00:08:07.561 "data_offset": 0, 00:08:07.561 "data_size": 65536 00:08:07.561 } 00:08:07.561 ] 00:08:07.561 }' 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.561 15:24:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.821 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.821 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.821 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:07.821 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:07.821 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.086 [2024-11-26 15:24:06.358464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:08.086 [2024-11-26 15:24:06.358581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:08.086 [2024-11-26 15:24:06.358622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:08.086 [2024-11-26 15:24:06.358884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:08.086 [2024-11-26 15:24:06.359039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:08.086 [2024-11-26 15:24:06.359083] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:08.086 [2024-11-26 15:24:06.359296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.086 NewBaseBdev 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.086 [ 00:08:08.086 { 00:08:08.086 "name": "NewBaseBdev", 00:08:08.086 "aliases": [ 00:08:08.086 "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e" 00:08:08.086 ], 
00:08:08.086 "product_name": "Malloc disk", 00:08:08.086 "block_size": 512, 00:08:08.086 "num_blocks": 65536, 00:08:08.086 "uuid": "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e", 00:08:08.086 "assigned_rate_limits": { 00:08:08.086 "rw_ios_per_sec": 0, 00:08:08.086 "rw_mbytes_per_sec": 0, 00:08:08.086 "r_mbytes_per_sec": 0, 00:08:08.086 "w_mbytes_per_sec": 0 00:08:08.086 }, 00:08:08.086 "claimed": true, 00:08:08.086 "claim_type": "exclusive_write", 00:08:08.086 "zoned": false, 00:08:08.086 "supported_io_types": { 00:08:08.086 "read": true, 00:08:08.086 "write": true, 00:08:08.086 "unmap": true, 00:08:08.086 "flush": true, 00:08:08.086 "reset": true, 00:08:08.086 "nvme_admin": false, 00:08:08.086 "nvme_io": false, 00:08:08.086 "nvme_io_md": false, 00:08:08.086 "write_zeroes": true, 00:08:08.086 "zcopy": true, 00:08:08.086 "get_zone_info": false, 00:08:08.086 "zone_management": false, 00:08:08.086 "zone_append": false, 00:08:08.086 "compare": false, 00:08:08.086 "compare_and_write": false, 00:08:08.086 "abort": true, 00:08:08.086 "seek_hole": false, 00:08:08.086 "seek_data": false, 00:08:08.086 "copy": true, 00:08:08.086 "nvme_iov_md": false 00:08:08.086 }, 00:08:08.086 "memory_domains": [ 00:08:08.086 { 00:08:08.086 "dma_device_id": "system", 00:08:08.086 "dma_device_type": 1 00:08:08.086 }, 00:08:08.086 { 00:08:08.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.086 "dma_device_type": 2 00:08:08.086 } 00:08:08.086 ], 00:08:08.086 "driver_specific": {} 00:08:08.086 } 00:08:08.086 ] 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.086 15:24:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.086 "name": "Existed_Raid", 00:08:08.086 "uuid": "d45c7e82-7b8b-4fd0-ab4f-ccae0b3a3324", 00:08:08.086 "strip_size_kb": 64, 00:08:08.086 "state": "online", 00:08:08.086 "raid_level": "raid0", 00:08:08.086 "superblock": false, 00:08:08.086 "num_base_bdevs": 3, 00:08:08.086 "num_base_bdevs_discovered": 3, 00:08:08.086 "num_base_bdevs_operational": 3, 00:08:08.086 "base_bdevs_list": [ 00:08:08.086 { 00:08:08.086 "name": "NewBaseBdev", 00:08:08.086 
"uuid": "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e", 00:08:08.086 "is_configured": true, 00:08:08.086 "data_offset": 0, 00:08:08.086 "data_size": 65536 00:08:08.086 }, 00:08:08.086 { 00:08:08.086 "name": "BaseBdev2", 00:08:08.086 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:08.086 "is_configured": true, 00:08:08.086 "data_offset": 0, 00:08:08.086 "data_size": 65536 00:08:08.086 }, 00:08:08.086 { 00:08:08.086 "name": "BaseBdev3", 00:08:08.086 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:08.086 "is_configured": true, 00:08:08.086 "data_offset": 0, 00:08:08.086 "data_size": 65536 00:08:08.086 } 00:08:08.086 ] 00:08:08.086 }' 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.086 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.354 [2024-11-26 
15:24:06.786917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.354 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.354 "name": "Existed_Raid", 00:08:08.354 "aliases": [ 00:08:08.354 "d45c7e82-7b8b-4fd0-ab4f-ccae0b3a3324" 00:08:08.354 ], 00:08:08.354 "product_name": "Raid Volume", 00:08:08.354 "block_size": 512, 00:08:08.354 "num_blocks": 196608, 00:08:08.354 "uuid": "d45c7e82-7b8b-4fd0-ab4f-ccae0b3a3324", 00:08:08.354 "assigned_rate_limits": { 00:08:08.354 "rw_ios_per_sec": 0, 00:08:08.354 "rw_mbytes_per_sec": 0, 00:08:08.354 "r_mbytes_per_sec": 0, 00:08:08.354 "w_mbytes_per_sec": 0 00:08:08.354 }, 00:08:08.354 "claimed": false, 00:08:08.354 "zoned": false, 00:08:08.354 "supported_io_types": { 00:08:08.354 "read": true, 00:08:08.354 "write": true, 00:08:08.354 "unmap": true, 00:08:08.354 "flush": true, 00:08:08.354 "reset": true, 00:08:08.354 "nvme_admin": false, 00:08:08.354 "nvme_io": false, 00:08:08.354 "nvme_io_md": false, 00:08:08.354 "write_zeroes": true, 00:08:08.354 "zcopy": false, 00:08:08.354 "get_zone_info": false, 00:08:08.354 "zone_management": false, 00:08:08.354 "zone_append": false, 00:08:08.354 "compare": false, 00:08:08.354 "compare_and_write": false, 00:08:08.354 "abort": false, 00:08:08.354 "seek_hole": false, 00:08:08.354 "seek_data": false, 00:08:08.354 "copy": false, 00:08:08.354 "nvme_iov_md": false 00:08:08.354 }, 00:08:08.355 "memory_domains": [ 00:08:08.355 { 00:08:08.355 "dma_device_id": "system", 00:08:08.355 "dma_device_type": 1 00:08:08.355 }, 00:08:08.355 { 00:08:08.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.355 "dma_device_type": 2 00:08:08.355 }, 00:08:08.355 { 00:08:08.355 "dma_device_id": "system", 00:08:08.355 "dma_device_type": 1 00:08:08.355 }, 00:08:08.355 { 00:08:08.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:08.355 "dma_device_type": 2 00:08:08.355 }, 00:08:08.355 { 00:08:08.355 "dma_device_id": "system", 00:08:08.355 "dma_device_type": 1 00:08:08.355 }, 00:08:08.355 { 00:08:08.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.355 "dma_device_type": 2 00:08:08.355 } 00:08:08.355 ], 00:08:08.355 "driver_specific": { 00:08:08.355 "raid": { 00:08:08.355 "uuid": "d45c7e82-7b8b-4fd0-ab4f-ccae0b3a3324", 00:08:08.355 "strip_size_kb": 64, 00:08:08.355 "state": "online", 00:08:08.355 "raid_level": "raid0", 00:08:08.355 "superblock": false, 00:08:08.355 "num_base_bdevs": 3, 00:08:08.355 "num_base_bdevs_discovered": 3, 00:08:08.355 "num_base_bdevs_operational": 3, 00:08:08.355 "base_bdevs_list": [ 00:08:08.355 { 00:08:08.355 "name": "NewBaseBdev", 00:08:08.355 "uuid": "0a3c27a2-4641-474d-b5b7-e1dc4b8fce4e", 00:08:08.355 "is_configured": true, 00:08:08.355 "data_offset": 0, 00:08:08.355 "data_size": 65536 00:08:08.355 }, 00:08:08.355 { 00:08:08.355 "name": "BaseBdev2", 00:08:08.355 "uuid": "eea4cc54-3973-40ba-9627-2cf0f1f54331", 00:08:08.355 "is_configured": true, 00:08:08.355 "data_offset": 0, 00:08:08.355 "data_size": 65536 00:08:08.355 }, 00:08:08.355 { 00:08:08.355 "name": "BaseBdev3", 00:08:08.355 "uuid": "fd88e818-e80e-4fe5-a601-bb61a047ae5d", 00:08:08.355 "is_configured": true, 00:08:08.355 "data_offset": 0, 00:08:08.355 "data_size": 65536 00:08:08.355 } 00:08:08.355 ] 00:08:08.355 } 00:08:08.355 } 00:08:08.355 }' 00:08:08.355 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:08.616 BaseBdev2 00:08:08.616 BaseBdev3' 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.616 15:24:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.616 [2024-11-26 15:24:07.010670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.616 [2024-11-26 15:24:07.010696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.616 [2024-11-26 15:24:07.010763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.616 [2024-11-26 15:24:07.010814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.616 [2024-11-26 15:24:07.010823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76638 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 76638 ']' 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 76638 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76638 00:08:08.616 killing process with pid 76638 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76638' 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 76638 00:08:08.616 [2024-11-26 15:24:07.058931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.616 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 76638 00:08:08.876 [2024-11-26 15:24:07.090299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.876 ************************************ 00:08:08.876 END TEST raid_state_function_test 00:08:08.876 ************************************ 00:08:08.876 15:24:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:08.876 00:08:08.876 real 0m8.542s 00:08:08.876 user 0m14.614s 00:08:08.876 sys 0m1.625s 00:08:08.876 15:24:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.876 15:24:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.136 15:24:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:09.136 15:24:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:09.136 15:24:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.136 15:24:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.136 ************************************ 00:08:09.136 START TEST raid_state_function_test_sb 00:08:09.136 ************************************ 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.136 15:24:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:09.136 Process raid pid: 77237 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77237 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
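The `cmp_base_bdev='512   '` values captured earlier in this log come from jq's `join(" ")`, which renders null or absent fields (`.md_size`, `.md_interleave`, `.dif_type` on a plain malloc bdev) as empty strings, leaving trailing spaces that the `[[ 512 == \5\1\2\ \ \ ]]` test then matches. A minimal Python sketch of the same comparison, using hypothetical bdev JSON shaped like the dumps in this log:

```python
import json

# Hypothetical bdev_get_bdevs output for a malloc base bdev (shape taken from
# the dumps in this log; md_size/md_interleave/dif_type are absent, i.e. null).
bdev_json = '[{"name": "BaseBdev1", "block_size": 512, "num_blocks": 65536}]'

def cmp_fields(bdevs_json: str) -> str:
    """Mirror jq's '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'."""
    bdev = json.loads(bdevs_json)[0]
    fields = [bdev.get(k) for k in ("block_size", "md_size", "md_interleave", "dif_type")]
    # jq's join() converts null values to empty strings, so missing metadata
    # fields leave trailing separator spaces in the result.
    return " ".join("" if v is None else str(v) for v in fields)

print(repr(cmp_fields(bdev_json)))  # '512   ' — "512" followed by three spaces
```

This is why the shell comparison must escape three trailing spaces rather than comparing against a bare `512`.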
00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77237' 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77237 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77237 ']' 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.136 15:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.136 [2024-11-26 15:24:07.472271] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:08:09.136 [2024-11-26 15:24:07.472499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.136 [2024-11-26 15:24:07.607351] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
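The `waitforlisten 77237` call above blocks until the freshly launched `bdev_svc` app accepts connections on `/var/tmp/spdk.sock`. A minimal Python sketch of that polling loop (a hypothetical reimplementation of the shell helper; the socket path, timeout, and interval are illustrative, not SPDK's actual values):

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(sock_path: str, timeout: float = 5.0, interval: float = 0.1) -> bool:
    """Poll a UNIX domain socket until something is accepting connections on it."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True  # the RPC server is up and listening
        except OSError:
            time.sleep(interval)  # not bound yet (or not listening); retry
        finally:
            s.close()
    return False

# Demo: start a listener shortly after the wait begins, as bdev_svc would.
path = os.path.join(tempfile.mkdtemp(), "spdk.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

def listen_later():
    time.sleep(0.3)
    server.bind(path)
    server.listen(1)

threading.Thread(target=listen_later, daemon=True).start()
print(wait_for_listen(path))  # True once the listener comes up
```

The real helper additionally checks that the target PID is still alive between polls, so a crashed app fails fast instead of burning the whole timeout.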
00:08:09.396 [2024-11-26 15:24:07.643367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.396 [2024-11-26 15:24:07.668147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.396 [2024-11-26 15:24:07.710566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.396 [2024-11-26 15:24:07.710672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.965 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.965 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:09.965 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.965 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.965 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.965 [2024-11-26 15:24:08.305228] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.965 [2024-11-26 15:24:08.305327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.965 [2024-11-26 15:24:08.305377] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.965 [2024-11-26 15:24:08.305389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.965 [2024-11-26 15:24:08.305401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:09.965 [2024-11-26 15:24:08.305409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.965 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.965 15:24:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.965 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.965 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.965 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.966 "name": "Existed_Raid", 00:08:09.966 "uuid": "7f74b4fe-eafb-4d97-a755-383146ba6f14", 00:08:09.966 "strip_size_kb": 64, 
00:08:09.966 "state": "configuring", 00:08:09.966 "raid_level": "raid0", 00:08:09.966 "superblock": true, 00:08:09.966 "num_base_bdevs": 3, 00:08:09.966 "num_base_bdevs_discovered": 0, 00:08:09.966 "num_base_bdevs_operational": 3, 00:08:09.966 "base_bdevs_list": [ 00:08:09.966 { 00:08:09.966 "name": "BaseBdev1", 00:08:09.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.966 "is_configured": false, 00:08:09.966 "data_offset": 0, 00:08:09.966 "data_size": 0 00:08:09.966 }, 00:08:09.966 { 00:08:09.966 "name": "BaseBdev2", 00:08:09.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.966 "is_configured": false, 00:08:09.966 "data_offset": 0, 00:08:09.966 "data_size": 0 00:08:09.966 }, 00:08:09.966 { 00:08:09.966 "name": "BaseBdev3", 00:08:09.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.966 "is_configured": false, 00:08:09.966 "data_offset": 0, 00:08:09.966 "data_size": 0 00:08:09.966 } 00:08:09.966 ] 00:08:09.966 }' 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.966 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.225 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.225 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.225 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.225 [2024-11-26 15:24:08.697222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.225 [2024-11-26 15:24:08.697300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.486 [2024-11-26 15:24:08.709278] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.486 [2024-11-26 15:24:08.709354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.486 [2024-11-26 15:24:08.709383] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.486 [2024-11-26 15:24:08.709403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.486 [2024-11-26 15:24:08.709422] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.486 [2024-11-26 15:24:08.709440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.486 [2024-11-26 15:24:08.730092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.486 BaseBdev1 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.486 [ 00:08:10.486 { 00:08:10.486 "name": "BaseBdev1", 00:08:10.486 "aliases": [ 00:08:10.486 "81aa83bd-d750-4c28-ace3-a1ca9db300fa" 00:08:10.486 ], 00:08:10.486 "product_name": "Malloc disk", 00:08:10.486 "block_size": 512, 00:08:10.486 "num_blocks": 65536, 00:08:10.486 "uuid": "81aa83bd-d750-4c28-ace3-a1ca9db300fa", 00:08:10.486 "assigned_rate_limits": { 00:08:10.486 "rw_ios_per_sec": 0, 00:08:10.486 "rw_mbytes_per_sec": 0, 00:08:10.486 "r_mbytes_per_sec": 0, 00:08:10.486 "w_mbytes_per_sec": 0 00:08:10.486 }, 00:08:10.486 "claimed": true, 00:08:10.486 "claim_type": "exclusive_write", 00:08:10.486 "zoned": false, 00:08:10.486 "supported_io_types": { 
00:08:10.486 "read": true, 00:08:10.486 "write": true, 00:08:10.486 "unmap": true, 00:08:10.486 "flush": true, 00:08:10.486 "reset": true, 00:08:10.486 "nvme_admin": false, 00:08:10.486 "nvme_io": false, 00:08:10.486 "nvme_io_md": false, 00:08:10.486 "write_zeroes": true, 00:08:10.486 "zcopy": true, 00:08:10.486 "get_zone_info": false, 00:08:10.486 "zone_management": false, 00:08:10.486 "zone_append": false, 00:08:10.486 "compare": false, 00:08:10.486 "compare_and_write": false, 00:08:10.486 "abort": true, 00:08:10.486 "seek_hole": false, 00:08:10.486 "seek_data": false, 00:08:10.486 "copy": true, 00:08:10.486 "nvme_iov_md": false 00:08:10.486 }, 00:08:10.486 "memory_domains": [ 00:08:10.486 { 00:08:10.486 "dma_device_id": "system", 00:08:10.486 "dma_device_type": 1 00:08:10.486 }, 00:08:10.486 { 00:08:10.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.486 "dma_device_type": 2 00:08:10.486 } 00:08:10.486 ], 00:08:10.486 "driver_specific": {} 00:08:10.486 } 00:08:10.486 ] 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.486 15:24:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.486 "name": "Existed_Raid", 00:08:10.486 "uuid": "92147642-6b9c-495a-85ed-888f5eeaee0b", 00:08:10.486 "strip_size_kb": 64, 00:08:10.486 "state": "configuring", 00:08:10.486 "raid_level": "raid0", 00:08:10.486 "superblock": true, 00:08:10.486 "num_base_bdevs": 3, 00:08:10.486 "num_base_bdevs_discovered": 1, 00:08:10.486 "num_base_bdevs_operational": 3, 00:08:10.486 "base_bdevs_list": [ 00:08:10.486 { 00:08:10.486 "name": "BaseBdev1", 00:08:10.486 "uuid": "81aa83bd-d750-4c28-ace3-a1ca9db300fa", 00:08:10.486 "is_configured": true, 00:08:10.486 "data_offset": 2048, 00:08:10.486 "data_size": 63488 00:08:10.486 }, 00:08:10.486 { 00:08:10.486 "name": "BaseBdev2", 00:08:10.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.486 "is_configured": false, 00:08:10.486 "data_offset": 0, 00:08:10.486 "data_size": 0 00:08:10.486 }, 00:08:10.486 { 00:08:10.486 "name": 
"BaseBdev3", 00:08:10.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.486 "is_configured": false, 00:08:10.486 "data_offset": 0, 00:08:10.486 "data_size": 0 00:08:10.486 } 00:08:10.486 ] 00:08:10.486 }' 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.486 15:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.746 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.746 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.746 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.746 [2024-11-26 15:24:09.210259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.746 [2024-11-26 15:24:09.210372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:10.746 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.746 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.746 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.746 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.746 [2024-11-26 15:24:09.218326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.746 [2024-11-26 15:24:09.220108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.746 [2024-11-26 15:24:09.220150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.746 [2024-11-26 15:24:09.220163] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.746 [2024-11-26 15:24:09.220186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.005 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.005 "name": "Existed_Raid", 00:08:11.005 "uuid": "ca67ce76-3b6f-467b-a991-a5cf03ee2dbe", 00:08:11.005 "strip_size_kb": 64, 00:08:11.005 "state": "configuring", 00:08:11.005 "raid_level": "raid0", 00:08:11.005 "superblock": true, 00:08:11.005 "num_base_bdevs": 3, 00:08:11.005 "num_base_bdevs_discovered": 1, 00:08:11.005 "num_base_bdevs_operational": 3, 00:08:11.006 "base_bdevs_list": [ 00:08:11.006 { 00:08:11.006 "name": "BaseBdev1", 00:08:11.006 "uuid": "81aa83bd-d750-4c28-ace3-a1ca9db300fa", 00:08:11.006 "is_configured": true, 00:08:11.006 "data_offset": 2048, 00:08:11.006 "data_size": 63488 00:08:11.006 }, 00:08:11.006 { 00:08:11.006 "name": "BaseBdev2", 00:08:11.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.006 "is_configured": false, 00:08:11.006 "data_offset": 0, 00:08:11.006 "data_size": 0 00:08:11.006 }, 00:08:11.006 { 00:08:11.006 "name": "BaseBdev3", 00:08:11.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.006 "is_configured": false, 00:08:11.006 "data_offset": 0, 00:08:11.006 "data_size": 0 00:08:11.006 } 00:08:11.006 ] 00:08:11.006 }' 00:08:11.006 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.006 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
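The `verify_raid_bdev_state` checks above boil down to selecting the raid bdev by name from `bdev_raid_get_bdevs all` output and comparing its state fields against the expected values. A minimal Python sketch of that selection (mirroring the `jq -r '.[] | select(.name == "Existed_Raid")'` filter; the sample JSON is condensed from the dumps in this log):

```python
import json

# Condensed from the raid_bdev_info dumps in this log
raid_bdevs = json.loads("""[{
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3
}]""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Select the raid bdev by name (jq's select) and compare its fields."""
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

print(verify_raid_bdev_state(raid_bdevs, "Existed_Raid",
                             "configuring", "raid0", 64, 3))  # True
```

In the log above the same check passes with `num_base_bdevs_discovered` climbing from 0 to 1 to 2 as each `bdev_malloc_create` + configure step claims another base bdev, while the state stays `configuring` until all three are present.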
00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.265 [2024-11-26 15:24:09.685461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.265 BaseBdev2 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.265 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.265 [ 00:08:11.265 { 00:08:11.265 "name": "BaseBdev2", 00:08:11.265 "aliases": [ 00:08:11.265 
"48a75e3c-a5d5-4d44-ba3a-d0385bd792a5" 00:08:11.265 ], 00:08:11.265 "product_name": "Malloc disk", 00:08:11.265 "block_size": 512, 00:08:11.265 "num_blocks": 65536, 00:08:11.265 "uuid": "48a75e3c-a5d5-4d44-ba3a-d0385bd792a5", 00:08:11.265 "assigned_rate_limits": { 00:08:11.265 "rw_ios_per_sec": 0, 00:08:11.265 "rw_mbytes_per_sec": 0, 00:08:11.265 "r_mbytes_per_sec": 0, 00:08:11.265 "w_mbytes_per_sec": 0 00:08:11.265 }, 00:08:11.266 "claimed": true, 00:08:11.266 "claim_type": "exclusive_write", 00:08:11.266 "zoned": false, 00:08:11.266 "supported_io_types": { 00:08:11.266 "read": true, 00:08:11.266 "write": true, 00:08:11.266 "unmap": true, 00:08:11.266 "flush": true, 00:08:11.266 "reset": true, 00:08:11.266 "nvme_admin": false, 00:08:11.266 "nvme_io": false, 00:08:11.266 "nvme_io_md": false, 00:08:11.266 "write_zeroes": true, 00:08:11.266 "zcopy": true, 00:08:11.266 "get_zone_info": false, 00:08:11.266 "zone_management": false, 00:08:11.266 "zone_append": false, 00:08:11.266 "compare": false, 00:08:11.266 "compare_and_write": false, 00:08:11.266 "abort": true, 00:08:11.266 "seek_hole": false, 00:08:11.266 "seek_data": false, 00:08:11.266 "copy": true, 00:08:11.266 "nvme_iov_md": false 00:08:11.266 }, 00:08:11.266 "memory_domains": [ 00:08:11.266 { 00:08:11.266 "dma_device_id": "system", 00:08:11.266 "dma_device_type": 1 00:08:11.266 }, 00:08:11.266 { 00:08:11.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.266 "dma_device_type": 2 00:08:11.266 } 00:08:11.266 ], 00:08:11.266 "driver_specific": {} 00:08:11.266 } 00:08:11.266 ] 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.266 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.526 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.526 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.526 "name": "Existed_Raid", 00:08:11.526 "uuid": "ca67ce76-3b6f-467b-a991-a5cf03ee2dbe", 00:08:11.526 
"strip_size_kb": 64, 00:08:11.526 "state": "configuring", 00:08:11.526 "raid_level": "raid0", 00:08:11.526 "superblock": true, 00:08:11.526 "num_base_bdevs": 3, 00:08:11.526 "num_base_bdevs_discovered": 2, 00:08:11.526 "num_base_bdevs_operational": 3, 00:08:11.526 "base_bdevs_list": [ 00:08:11.526 { 00:08:11.526 "name": "BaseBdev1", 00:08:11.526 "uuid": "81aa83bd-d750-4c28-ace3-a1ca9db300fa", 00:08:11.526 "is_configured": true, 00:08:11.526 "data_offset": 2048, 00:08:11.526 "data_size": 63488 00:08:11.526 }, 00:08:11.526 { 00:08:11.526 "name": "BaseBdev2", 00:08:11.526 "uuid": "48a75e3c-a5d5-4d44-ba3a-d0385bd792a5", 00:08:11.526 "is_configured": true, 00:08:11.526 "data_offset": 2048, 00:08:11.526 "data_size": 63488 00:08:11.526 }, 00:08:11.526 { 00:08:11.526 "name": "BaseBdev3", 00:08:11.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.526 "is_configured": false, 00:08:11.526 "data_offset": 0, 00:08:11.526 "data_size": 0 00:08:11.526 } 00:08:11.526 ] 00:08:11.526 }' 00:08:11.526 15:24:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.526 15:24:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.785 [2024-11-26 15:24:10.160498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.785 [2024-11-26 15:24:10.160686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:11.785 [2024-11-26 15:24:10.160702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:11.785 [2024-11-26 15:24:10.160997] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:11.785 [2024-11-26 15:24:10.161122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:11.785 [2024-11-26 15:24:10.161135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:11.785 BaseBdev3 00:08:11.785 [2024-11-26 15:24:10.161261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.785 [ 00:08:11.785 { 00:08:11.785 "name": "BaseBdev3", 00:08:11.785 "aliases": [ 00:08:11.785 "a5e06403-82d4-4143-8b37-d22a5dea55af" 00:08:11.785 ], 00:08:11.785 "product_name": "Malloc disk", 00:08:11.785 "block_size": 512, 00:08:11.785 "num_blocks": 65536, 00:08:11.785 "uuid": "a5e06403-82d4-4143-8b37-d22a5dea55af", 00:08:11.785 "assigned_rate_limits": { 00:08:11.785 "rw_ios_per_sec": 0, 00:08:11.785 "rw_mbytes_per_sec": 0, 00:08:11.785 "r_mbytes_per_sec": 0, 00:08:11.785 "w_mbytes_per_sec": 0 00:08:11.785 }, 00:08:11.785 "claimed": true, 00:08:11.785 "claim_type": "exclusive_write", 00:08:11.785 "zoned": false, 00:08:11.785 "supported_io_types": { 00:08:11.785 "read": true, 00:08:11.785 "write": true, 00:08:11.785 "unmap": true, 00:08:11.785 "flush": true, 00:08:11.785 "reset": true, 00:08:11.785 "nvme_admin": false, 00:08:11.785 "nvme_io": false, 00:08:11.785 "nvme_io_md": false, 00:08:11.785 "write_zeroes": true, 00:08:11.785 "zcopy": true, 00:08:11.785 "get_zone_info": false, 00:08:11.785 "zone_management": false, 00:08:11.785 "zone_append": false, 00:08:11.785 "compare": false, 00:08:11.785 "compare_and_write": false, 00:08:11.785 "abort": true, 00:08:11.785 "seek_hole": false, 00:08:11.785 "seek_data": false, 00:08:11.785 "copy": true, 00:08:11.785 "nvme_iov_md": false 00:08:11.785 }, 00:08:11.785 "memory_domains": [ 00:08:11.785 { 00:08:11.785 "dma_device_id": "system", 00:08:11.785 "dma_device_type": 1 00:08:11.785 }, 00:08:11.785 { 00:08:11.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.785 "dma_device_type": 2 00:08:11.785 } 00:08:11.785 ], 00:08:11.785 "driver_specific": {} 00:08:11.785 } 00:08:11.785 ] 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.785 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.785 
15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.785 "name": "Existed_Raid", 00:08:11.785 "uuid": "ca67ce76-3b6f-467b-a991-a5cf03ee2dbe", 00:08:11.785 "strip_size_kb": 64, 00:08:11.785 "state": "online", 00:08:11.785 "raid_level": "raid0", 00:08:11.785 "superblock": true, 00:08:11.785 "num_base_bdevs": 3, 00:08:11.785 "num_base_bdevs_discovered": 3, 00:08:11.785 "num_base_bdevs_operational": 3, 00:08:11.785 "base_bdevs_list": [ 00:08:11.785 { 00:08:11.785 "name": "BaseBdev1", 00:08:11.785 "uuid": "81aa83bd-d750-4c28-ace3-a1ca9db300fa", 00:08:11.785 "is_configured": true, 00:08:11.785 "data_offset": 2048, 00:08:11.785 "data_size": 63488 00:08:11.785 }, 00:08:11.785 { 00:08:11.785 "name": "BaseBdev2", 00:08:11.785 "uuid": "48a75e3c-a5d5-4d44-ba3a-d0385bd792a5", 00:08:11.785 "is_configured": true, 00:08:11.785 "data_offset": 2048, 00:08:11.785 "data_size": 63488 00:08:11.785 }, 00:08:11.785 { 00:08:11.785 "name": "BaseBdev3", 00:08:11.786 "uuid": "a5e06403-82d4-4143-8b37-d22a5dea55af", 00:08:11.786 "is_configured": true, 00:08:11.786 "data_offset": 2048, 00:08:11.786 "data_size": 63488 00:08:11.786 } 00:08:11.786 ] 00:08:11.786 }' 00:08:11.786 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.786 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.355 [2024-11-26 15:24:10.608960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.355 "name": "Existed_Raid", 00:08:12.355 "aliases": [ 00:08:12.355 "ca67ce76-3b6f-467b-a991-a5cf03ee2dbe" 00:08:12.355 ], 00:08:12.355 "product_name": "Raid Volume", 00:08:12.355 "block_size": 512, 00:08:12.355 "num_blocks": 190464, 00:08:12.355 "uuid": "ca67ce76-3b6f-467b-a991-a5cf03ee2dbe", 00:08:12.355 "assigned_rate_limits": { 00:08:12.355 "rw_ios_per_sec": 0, 00:08:12.355 "rw_mbytes_per_sec": 0, 00:08:12.355 "r_mbytes_per_sec": 0, 00:08:12.355 "w_mbytes_per_sec": 0 00:08:12.355 }, 00:08:12.355 "claimed": false, 00:08:12.355 "zoned": false, 00:08:12.355 "supported_io_types": { 00:08:12.355 "read": true, 00:08:12.355 "write": true, 00:08:12.355 "unmap": true, 00:08:12.355 "flush": true, 00:08:12.355 "reset": true, 00:08:12.355 "nvme_admin": false, 00:08:12.355 "nvme_io": false, 00:08:12.355 "nvme_io_md": false, 00:08:12.355 "write_zeroes": true, 00:08:12.355 "zcopy": false, 00:08:12.355 "get_zone_info": false, 00:08:12.355 "zone_management": false, 00:08:12.355 "zone_append": false, 00:08:12.355 "compare": false, 00:08:12.355 "compare_and_write": false, 
00:08:12.355 "abort": false, 00:08:12.355 "seek_hole": false, 00:08:12.355 "seek_data": false, 00:08:12.355 "copy": false, 00:08:12.355 "nvme_iov_md": false 00:08:12.355 }, 00:08:12.355 "memory_domains": [ 00:08:12.355 { 00:08:12.355 "dma_device_id": "system", 00:08:12.355 "dma_device_type": 1 00:08:12.355 }, 00:08:12.355 { 00:08:12.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.355 "dma_device_type": 2 00:08:12.355 }, 00:08:12.355 { 00:08:12.355 "dma_device_id": "system", 00:08:12.355 "dma_device_type": 1 00:08:12.355 }, 00:08:12.355 { 00:08:12.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.355 "dma_device_type": 2 00:08:12.355 }, 00:08:12.355 { 00:08:12.355 "dma_device_id": "system", 00:08:12.355 "dma_device_type": 1 00:08:12.355 }, 00:08:12.355 { 00:08:12.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.355 "dma_device_type": 2 00:08:12.355 } 00:08:12.355 ], 00:08:12.355 "driver_specific": { 00:08:12.355 "raid": { 00:08:12.355 "uuid": "ca67ce76-3b6f-467b-a991-a5cf03ee2dbe", 00:08:12.355 "strip_size_kb": 64, 00:08:12.355 "state": "online", 00:08:12.355 "raid_level": "raid0", 00:08:12.355 "superblock": true, 00:08:12.355 "num_base_bdevs": 3, 00:08:12.355 "num_base_bdevs_discovered": 3, 00:08:12.355 "num_base_bdevs_operational": 3, 00:08:12.355 "base_bdevs_list": [ 00:08:12.355 { 00:08:12.355 "name": "BaseBdev1", 00:08:12.355 "uuid": "81aa83bd-d750-4c28-ace3-a1ca9db300fa", 00:08:12.355 "is_configured": true, 00:08:12.355 "data_offset": 2048, 00:08:12.355 "data_size": 63488 00:08:12.355 }, 00:08:12.355 { 00:08:12.355 "name": "BaseBdev2", 00:08:12.355 "uuid": "48a75e3c-a5d5-4d44-ba3a-d0385bd792a5", 00:08:12.355 "is_configured": true, 00:08:12.355 "data_offset": 2048, 00:08:12.355 "data_size": 63488 00:08:12.355 }, 00:08:12.355 { 00:08:12.355 "name": "BaseBdev3", 00:08:12.355 "uuid": "a5e06403-82d4-4143-8b37-d22a5dea55af", 00:08:12.355 "is_configured": true, 00:08:12.355 "data_offset": 2048, 00:08:12.355 "data_size": 63488 00:08:12.355 } 
00:08:12.355 ] 00:08:12.355 } 00:08:12.355 } 00:08:12.355 }' 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.355 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:12.356 BaseBdev2 00:08:12.356 BaseBdev3' 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.356 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.616 [2024-11-26 15:24:10.860788] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.616 [2024-11-26 15:24:10.860830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.616 [2024-11-26 15:24:10.860896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.616 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.617 15:24:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.617 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.617 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.617 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.617 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.617 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.617 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.617 15:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.617 "name": "Existed_Raid", 00:08:12.617 "uuid": "ca67ce76-3b6f-467b-a991-a5cf03ee2dbe", 00:08:12.617 "strip_size_kb": 64, 00:08:12.617 "state": "offline", 00:08:12.617 "raid_level": "raid0", 00:08:12.617 "superblock": true, 00:08:12.617 "num_base_bdevs": 3, 00:08:12.617 "num_base_bdevs_discovered": 2, 00:08:12.617 "num_base_bdevs_operational": 2, 00:08:12.617 "base_bdevs_list": [ 00:08:12.617 { 00:08:12.617 "name": null, 00:08:12.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.617 "is_configured": false, 00:08:12.617 "data_offset": 0, 00:08:12.617 "data_size": 63488 00:08:12.617 }, 00:08:12.617 { 00:08:12.617 "name": "BaseBdev2", 00:08:12.617 "uuid": "48a75e3c-a5d5-4d44-ba3a-d0385bd792a5", 00:08:12.617 "is_configured": true, 00:08:12.617 "data_offset": 2048, 00:08:12.617 "data_size": 63488 00:08:12.617 }, 00:08:12.617 { 00:08:12.617 "name": "BaseBdev3", 00:08:12.617 "uuid": "a5e06403-82d4-4143-8b37-d22a5dea55af", 00:08:12.617 "is_configured": true, 00:08:12.617 "data_offset": 2048, 00:08:12.617 "data_size": 63488 00:08:12.617 } 00:08:12.617 ] 00:08:12.617 }' 00:08:12.617 15:24:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.617 15:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:12.877 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.878 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.878 [2024-11-26 15:24:11.336202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.878 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.878 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.878 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.138 
15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.138 [2024-11-26 15:24:11.395568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:13.138 [2024-11-26 15:24:11.395685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.138 15:24:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.138 BaseBdev2 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:13.138 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.139 [ 00:08:13.139 { 00:08:13.139 "name": "BaseBdev2", 00:08:13.139 "aliases": [ 00:08:13.139 "1d2a4269-48a4-4e48-a1dd-c8750ed12117" 00:08:13.139 ], 00:08:13.139 "product_name": "Malloc disk", 00:08:13.139 "block_size": 512, 00:08:13.139 "num_blocks": 65536, 00:08:13.139 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:13.139 "assigned_rate_limits": { 00:08:13.139 "rw_ios_per_sec": 0, 00:08:13.139 "rw_mbytes_per_sec": 0, 00:08:13.139 "r_mbytes_per_sec": 0, 00:08:13.139 "w_mbytes_per_sec": 0 00:08:13.139 }, 00:08:13.139 "claimed": false, 00:08:13.139 "zoned": false, 00:08:13.139 "supported_io_types": { 00:08:13.139 "read": true, 00:08:13.139 "write": true, 00:08:13.139 "unmap": true, 00:08:13.139 "flush": true, 00:08:13.139 "reset": true, 00:08:13.139 "nvme_admin": false, 00:08:13.139 "nvme_io": false, 00:08:13.139 "nvme_io_md": false, 00:08:13.139 "write_zeroes": true, 00:08:13.139 "zcopy": true, 00:08:13.139 "get_zone_info": false, 00:08:13.139 "zone_management": false, 00:08:13.139 "zone_append": false, 00:08:13.139 "compare": false, 00:08:13.139 "compare_and_write": false, 00:08:13.139 "abort": true, 00:08:13.139 "seek_hole": 
false, 00:08:13.139 "seek_data": false, 00:08:13.139 "copy": true, 00:08:13.139 "nvme_iov_md": false 00:08:13.139 }, 00:08:13.139 "memory_domains": [ 00:08:13.139 { 00:08:13.139 "dma_device_id": "system", 00:08:13.139 "dma_device_type": 1 00:08:13.139 }, 00:08:13.139 { 00:08:13.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.139 "dma_device_type": 2 00:08:13.139 } 00:08:13.139 ], 00:08:13.139 "driver_specific": {} 00:08:13.139 } 00:08:13.139 ] 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.139 BaseBdev3 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.139 15:24:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.139 [ 00:08:13.139 { 00:08:13.139 "name": "BaseBdev3", 00:08:13.139 "aliases": [ 00:08:13.139 "abeaacdb-f737-4383-83d3-2b2ac136bd33" 00:08:13.139 ], 00:08:13.139 "product_name": "Malloc disk", 00:08:13.139 "block_size": 512, 00:08:13.139 "num_blocks": 65536, 00:08:13.139 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:13.139 "assigned_rate_limits": { 00:08:13.139 "rw_ios_per_sec": 0, 00:08:13.139 "rw_mbytes_per_sec": 0, 00:08:13.139 "r_mbytes_per_sec": 0, 00:08:13.139 "w_mbytes_per_sec": 0 00:08:13.139 }, 00:08:13.139 "claimed": false, 00:08:13.139 "zoned": false, 00:08:13.139 "supported_io_types": { 00:08:13.139 "read": true, 00:08:13.139 "write": true, 00:08:13.139 "unmap": true, 00:08:13.139 "flush": true, 00:08:13.139 "reset": true, 00:08:13.139 "nvme_admin": false, 00:08:13.139 "nvme_io": false, 00:08:13.139 "nvme_io_md": false, 00:08:13.139 "write_zeroes": true, 00:08:13.139 "zcopy": true, 00:08:13.139 "get_zone_info": false, 00:08:13.139 "zone_management": false, 00:08:13.139 "zone_append": false, 00:08:13.139 "compare": false, 00:08:13.139 
"compare_and_write": false, 00:08:13.139 "abort": true, 00:08:13.139 "seek_hole": false, 00:08:13.139 "seek_data": false, 00:08:13.139 "copy": true, 00:08:13.139 "nvme_iov_md": false 00:08:13.139 }, 00:08:13.139 "memory_domains": [ 00:08:13.139 { 00:08:13.139 "dma_device_id": "system", 00:08:13.139 "dma_device_type": 1 00:08:13.139 }, 00:08:13.139 { 00:08:13.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.139 "dma_device_type": 2 00:08:13.139 } 00:08:13.139 ], 00:08:13.139 "driver_specific": {} 00:08:13.139 } 00:08:13.139 ] 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:13.139 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.140 [2024-11-26 15:24:11.564351] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.140 [2024-11-26 15:24:11.564453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.140 [2024-11-26 15:24:11.564490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.140 [2024-11-26 15:24:11.566228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.140 "name": "Existed_Raid", 00:08:13.140 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:13.140 
"strip_size_kb": 64, 00:08:13.140 "state": "configuring", 00:08:13.140 "raid_level": "raid0", 00:08:13.140 "superblock": true, 00:08:13.140 "num_base_bdevs": 3, 00:08:13.140 "num_base_bdevs_discovered": 2, 00:08:13.140 "num_base_bdevs_operational": 3, 00:08:13.140 "base_bdevs_list": [ 00:08:13.140 { 00:08:13.140 "name": "BaseBdev1", 00:08:13.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.140 "is_configured": false, 00:08:13.140 "data_offset": 0, 00:08:13.140 "data_size": 0 00:08:13.140 }, 00:08:13.140 { 00:08:13.140 "name": "BaseBdev2", 00:08:13.140 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:13.140 "is_configured": true, 00:08:13.140 "data_offset": 2048, 00:08:13.140 "data_size": 63488 00:08:13.140 }, 00:08:13.140 { 00:08:13.140 "name": "BaseBdev3", 00:08:13.140 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:13.140 "is_configured": true, 00:08:13.140 "data_offset": 2048, 00:08:13.140 "data_size": 63488 00:08:13.140 } 00:08:13.140 ] 00:08:13.140 }' 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.140 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.710 [2024-11-26 15:24:11.932416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.710 "name": "Existed_Raid", 00:08:13.710 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:13.710 "strip_size_kb": 64, 00:08:13.710 "state": "configuring", 00:08:13.710 "raid_level": "raid0", 00:08:13.710 "superblock": true, 00:08:13.710 "num_base_bdevs": 3, 00:08:13.710 "num_base_bdevs_discovered": 1, 00:08:13.710 
"num_base_bdevs_operational": 3, 00:08:13.710 "base_bdevs_list": [ 00:08:13.710 { 00:08:13.710 "name": "BaseBdev1", 00:08:13.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.710 "is_configured": false, 00:08:13.710 "data_offset": 0, 00:08:13.710 "data_size": 0 00:08:13.710 }, 00:08:13.710 { 00:08:13.710 "name": null, 00:08:13.710 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:13.710 "is_configured": false, 00:08:13.710 "data_offset": 0, 00:08:13.710 "data_size": 63488 00:08:13.710 }, 00:08:13.710 { 00:08:13.710 "name": "BaseBdev3", 00:08:13.710 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:13.710 "is_configured": true, 00:08:13.710 "data_offset": 2048, 00:08:13.710 "data_size": 63488 00:08:13.710 } 00:08:13.710 ] 00:08:13.710 }' 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.710 15:24:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.971 [2024-11-26 15:24:12.355503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.971 BaseBdev1 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.971 [ 00:08:13.971 { 00:08:13.971 "name": "BaseBdev1", 00:08:13.971 "aliases": [ 00:08:13.971 "6bfe34a5-428d-45be-b7e0-620ef87f3032" 00:08:13.971 ], 00:08:13.971 "product_name": "Malloc 
disk", 00:08:13.971 "block_size": 512, 00:08:13.971 "num_blocks": 65536, 00:08:13.971 "uuid": "6bfe34a5-428d-45be-b7e0-620ef87f3032", 00:08:13.971 "assigned_rate_limits": { 00:08:13.971 "rw_ios_per_sec": 0, 00:08:13.971 "rw_mbytes_per_sec": 0, 00:08:13.971 "r_mbytes_per_sec": 0, 00:08:13.971 "w_mbytes_per_sec": 0 00:08:13.971 }, 00:08:13.971 "claimed": true, 00:08:13.971 "claim_type": "exclusive_write", 00:08:13.971 "zoned": false, 00:08:13.971 "supported_io_types": { 00:08:13.971 "read": true, 00:08:13.971 "write": true, 00:08:13.971 "unmap": true, 00:08:13.971 "flush": true, 00:08:13.971 "reset": true, 00:08:13.971 "nvme_admin": false, 00:08:13.971 "nvme_io": false, 00:08:13.971 "nvme_io_md": false, 00:08:13.971 "write_zeroes": true, 00:08:13.971 "zcopy": true, 00:08:13.971 "get_zone_info": false, 00:08:13.971 "zone_management": false, 00:08:13.971 "zone_append": false, 00:08:13.971 "compare": false, 00:08:13.971 "compare_and_write": false, 00:08:13.971 "abort": true, 00:08:13.971 "seek_hole": false, 00:08:13.971 "seek_data": false, 00:08:13.971 "copy": true, 00:08:13.971 "nvme_iov_md": false 00:08:13.971 }, 00:08:13.971 "memory_domains": [ 00:08:13.971 { 00:08:13.971 "dma_device_id": "system", 00:08:13.971 "dma_device_type": 1 00:08:13.971 }, 00:08:13.971 { 00:08:13.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.971 "dma_device_type": 2 00:08:13.971 } 00:08:13.971 ], 00:08:13.971 "driver_specific": {} 00:08:13.971 } 00:08:13.971 ] 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.971 15:24:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.971 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.972 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.972 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.972 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.972 "name": "Existed_Raid", 00:08:13.972 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:13.972 "strip_size_kb": 64, 00:08:13.972 "state": "configuring", 00:08:13.972 "raid_level": "raid0", 00:08:13.972 "superblock": true, 00:08:13.972 "num_base_bdevs": 3, 00:08:13.972 "num_base_bdevs_discovered": 2, 00:08:13.972 "num_base_bdevs_operational": 3, 00:08:13.972 "base_bdevs_list": [ 00:08:13.972 { 
00:08:13.972 "name": "BaseBdev1", 00:08:13.972 "uuid": "6bfe34a5-428d-45be-b7e0-620ef87f3032", 00:08:13.972 "is_configured": true, 00:08:13.972 "data_offset": 2048, 00:08:13.972 "data_size": 63488 00:08:13.972 }, 00:08:13.972 { 00:08:13.972 "name": null, 00:08:13.972 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:13.972 "is_configured": false, 00:08:13.972 "data_offset": 0, 00:08:13.972 "data_size": 63488 00:08:13.972 }, 00:08:13.972 { 00:08:13.972 "name": "BaseBdev3", 00:08:13.972 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:13.972 "is_configured": true, 00:08:13.972 "data_offset": 2048, 00:08:13.972 "data_size": 63488 00:08:13.972 } 00:08:13.972 ] 00:08:13.972 }' 00:08:13.972 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.972 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.542 [2024-11-26 15:24:12.791671] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.542 15:24:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.542 "name": "Existed_Raid", 00:08:14.542 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:14.542 "strip_size_kb": 64, 00:08:14.542 "state": "configuring", 00:08:14.542 "raid_level": "raid0", 00:08:14.542 "superblock": true, 00:08:14.542 "num_base_bdevs": 3, 00:08:14.542 "num_base_bdevs_discovered": 1, 00:08:14.542 "num_base_bdevs_operational": 3, 00:08:14.542 "base_bdevs_list": [ 00:08:14.542 { 00:08:14.542 "name": "BaseBdev1", 00:08:14.542 "uuid": "6bfe34a5-428d-45be-b7e0-620ef87f3032", 00:08:14.542 "is_configured": true, 00:08:14.542 "data_offset": 2048, 00:08:14.542 "data_size": 63488 00:08:14.542 }, 00:08:14.542 { 00:08:14.542 "name": null, 00:08:14.542 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:14.542 "is_configured": false, 00:08:14.542 "data_offset": 0, 00:08:14.542 "data_size": 63488 00:08:14.542 }, 00:08:14.542 { 00:08:14.542 "name": null, 00:08:14.542 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:14.542 "is_configured": false, 00:08:14.542 "data_offset": 0, 00:08:14.542 "data_size": 63488 00:08:14.542 } 00:08:14.542 ] 00:08:14.542 }' 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.542 15:24:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.802 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.802 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.802 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.802 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:14.802 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.802 15:24:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:14.802 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:14.802 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.802 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.802 [2024-11-26 15:24:13.275839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.062 "name": "Existed_Raid", 00:08:15.062 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:15.062 "strip_size_kb": 64, 00:08:15.062 "state": "configuring", 00:08:15.062 "raid_level": "raid0", 00:08:15.062 "superblock": true, 00:08:15.062 "num_base_bdevs": 3, 00:08:15.062 "num_base_bdevs_discovered": 2, 00:08:15.062 "num_base_bdevs_operational": 3, 00:08:15.062 "base_bdevs_list": [ 00:08:15.062 { 00:08:15.062 "name": "BaseBdev1", 00:08:15.062 "uuid": "6bfe34a5-428d-45be-b7e0-620ef87f3032", 00:08:15.062 "is_configured": true, 00:08:15.062 "data_offset": 2048, 00:08:15.062 "data_size": 63488 00:08:15.062 }, 00:08:15.062 { 00:08:15.062 "name": null, 00:08:15.062 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:15.062 "is_configured": false, 00:08:15.062 "data_offset": 0, 00:08:15.062 "data_size": 63488 00:08:15.062 }, 00:08:15.062 { 00:08:15.062 "name": "BaseBdev3", 00:08:15.062 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:15.062 "is_configured": true, 00:08:15.062 "data_offset": 2048, 00:08:15.062 "data_size": 63488 00:08:15.062 } 00:08:15.062 ] 00:08:15.062 }' 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.062 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.322 [2024-11-26 15:24:13.751995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.322 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.582 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.582 "name": "Existed_Raid", 00:08:15.582 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:15.582 "strip_size_kb": 64, 00:08:15.582 "state": "configuring", 00:08:15.582 "raid_level": "raid0", 00:08:15.582 "superblock": true, 00:08:15.582 "num_base_bdevs": 3, 00:08:15.582 "num_base_bdevs_discovered": 1, 00:08:15.582 "num_base_bdevs_operational": 3, 00:08:15.582 "base_bdevs_list": [ 00:08:15.582 { 00:08:15.582 "name": null, 00:08:15.582 "uuid": "6bfe34a5-428d-45be-b7e0-620ef87f3032", 00:08:15.582 "is_configured": false, 00:08:15.582 "data_offset": 0, 00:08:15.582 "data_size": 63488 00:08:15.582 }, 00:08:15.582 { 00:08:15.582 "name": null, 00:08:15.582 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:15.582 "is_configured": false, 00:08:15.583 "data_offset": 0, 00:08:15.583 "data_size": 63488 00:08:15.583 }, 00:08:15.583 { 00:08:15.583 "name": "BaseBdev3", 00:08:15.583 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:15.583 
"is_configured": true, 00:08:15.583 "data_offset": 2048, 00:08:15.583 "data_size": 63488 00:08:15.583 } 00:08:15.583 ] 00:08:15.583 }' 00:08:15.583 15:24:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.583 15:24:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.843 [2024-11-26 15:24:14.226562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.843 "name": "Existed_Raid", 00:08:15.843 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:15.843 "strip_size_kb": 64, 00:08:15.843 "state": "configuring", 00:08:15.843 "raid_level": "raid0", 00:08:15.843 "superblock": true, 00:08:15.843 "num_base_bdevs": 3, 00:08:15.843 "num_base_bdevs_discovered": 2, 00:08:15.843 "num_base_bdevs_operational": 3, 00:08:15.843 "base_bdevs_list": [ 00:08:15.843 { 00:08:15.843 "name": null, 00:08:15.843 "uuid": 
"6bfe34a5-428d-45be-b7e0-620ef87f3032", 00:08:15.843 "is_configured": false, 00:08:15.843 "data_offset": 0, 00:08:15.843 "data_size": 63488 00:08:15.843 }, 00:08:15.843 { 00:08:15.843 "name": "BaseBdev2", 00:08:15.843 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:15.843 "is_configured": true, 00:08:15.843 "data_offset": 2048, 00:08:15.843 "data_size": 63488 00:08:15.843 }, 00:08:15.843 { 00:08:15.843 "name": "BaseBdev3", 00:08:15.843 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:15.843 "is_configured": true, 00:08:15.843 "data_offset": 2048, 00:08:15.843 "data_size": 63488 00:08:15.843 } 00:08:15.843 ] 00:08:15.843 }' 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.843 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6bfe34a5-428d-45be-b7e0-620ef87f3032 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.414 [2024-11-26 15:24:14.757723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:16.414 [2024-11-26 15:24:14.757946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:16.414 [2024-11-26 15:24:14.757996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.414 [2024-11-26 15:24:14.758263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:16.414 [2024-11-26 15:24:14.758411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:16.414 [2024-11-26 15:24:14.758459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:16.414 NewBaseBdev 00:08:16.414 [2024-11-26 15:24:14.758600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@905 -- # local i 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.414 [ 00:08:16.414 { 00:08:16.414 "name": "NewBaseBdev", 00:08:16.414 "aliases": [ 00:08:16.414 "6bfe34a5-428d-45be-b7e0-620ef87f3032" 00:08:16.414 ], 00:08:16.414 "product_name": "Malloc disk", 00:08:16.414 "block_size": 512, 00:08:16.414 "num_blocks": 65536, 00:08:16.414 "uuid": "6bfe34a5-428d-45be-b7e0-620ef87f3032", 00:08:16.414 "assigned_rate_limits": { 00:08:16.414 "rw_ios_per_sec": 0, 00:08:16.414 "rw_mbytes_per_sec": 0, 00:08:16.414 "r_mbytes_per_sec": 0, 00:08:16.414 "w_mbytes_per_sec": 0 00:08:16.414 }, 00:08:16.414 "claimed": true, 00:08:16.414 "claim_type": "exclusive_write", 00:08:16.414 "zoned": false, 00:08:16.414 "supported_io_types": { 00:08:16.414 "read": true, 00:08:16.414 "write": true, 00:08:16.414 "unmap": true, 00:08:16.414 "flush": true, 00:08:16.414 "reset": true, 00:08:16.414 "nvme_admin": false, 00:08:16.414 "nvme_io": false, 00:08:16.414 "nvme_io_md": false, 
00:08:16.414 "write_zeroes": true, 00:08:16.414 "zcopy": true, 00:08:16.414 "get_zone_info": false, 00:08:16.414 "zone_management": false, 00:08:16.414 "zone_append": false, 00:08:16.414 "compare": false, 00:08:16.414 "compare_and_write": false, 00:08:16.414 "abort": true, 00:08:16.414 "seek_hole": false, 00:08:16.414 "seek_data": false, 00:08:16.414 "copy": true, 00:08:16.414 "nvme_iov_md": false 00:08:16.414 }, 00:08:16.414 "memory_domains": [ 00:08:16.414 { 00:08:16.414 "dma_device_id": "system", 00:08:16.414 "dma_device_type": 1 00:08:16.414 }, 00:08:16.414 { 00:08:16.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.414 "dma_device_type": 2 00:08:16.414 } 00:08:16.414 ], 00:08:16.414 "driver_specific": {} 00:08:16.414 } 00:08:16.414 ] 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.414 15:24:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.415 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.415 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.415 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.415 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.415 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.415 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.415 15:24:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.415 "name": "Existed_Raid", 00:08:16.415 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:16.415 "strip_size_kb": 64, 00:08:16.415 "state": "online", 00:08:16.415 "raid_level": "raid0", 00:08:16.415 "superblock": true, 00:08:16.415 "num_base_bdevs": 3, 00:08:16.415 "num_base_bdevs_discovered": 3, 00:08:16.415 "num_base_bdevs_operational": 3, 00:08:16.415 "base_bdevs_list": [ 00:08:16.415 { 00:08:16.415 "name": "NewBaseBdev", 00:08:16.415 "uuid": "6bfe34a5-428d-45be-b7e0-620ef87f3032", 00:08:16.415 "is_configured": true, 00:08:16.415 "data_offset": 2048, 00:08:16.415 "data_size": 63488 00:08:16.415 }, 00:08:16.415 { 00:08:16.415 "name": "BaseBdev2", 00:08:16.415 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:16.415 "is_configured": true, 00:08:16.415 "data_offset": 2048, 00:08:16.415 "data_size": 63488 00:08:16.415 }, 00:08:16.415 { 00:08:16.415 "name": "BaseBdev3", 00:08:16.415 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:16.415 "is_configured": true, 00:08:16.415 "data_offset": 2048, 00:08:16.415 "data_size": 63488 00:08:16.415 } 00:08:16.415 ] 00:08:16.415 }' 00:08:16.415 15:24:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.415 15:24:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.985 [2024-11-26 15:24:15.254196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.985 "name": "Existed_Raid", 00:08:16.985 "aliases": [ 00:08:16.985 "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f" 00:08:16.985 ], 00:08:16.985 "product_name": "Raid Volume", 00:08:16.985 "block_size": 512, 00:08:16.985 "num_blocks": 190464, 00:08:16.985 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:16.985 
"assigned_rate_limits": { 00:08:16.985 "rw_ios_per_sec": 0, 00:08:16.985 "rw_mbytes_per_sec": 0, 00:08:16.985 "r_mbytes_per_sec": 0, 00:08:16.985 "w_mbytes_per_sec": 0 00:08:16.985 }, 00:08:16.985 "claimed": false, 00:08:16.985 "zoned": false, 00:08:16.985 "supported_io_types": { 00:08:16.985 "read": true, 00:08:16.985 "write": true, 00:08:16.985 "unmap": true, 00:08:16.985 "flush": true, 00:08:16.985 "reset": true, 00:08:16.985 "nvme_admin": false, 00:08:16.985 "nvme_io": false, 00:08:16.985 "nvme_io_md": false, 00:08:16.985 "write_zeroes": true, 00:08:16.985 "zcopy": false, 00:08:16.985 "get_zone_info": false, 00:08:16.985 "zone_management": false, 00:08:16.985 "zone_append": false, 00:08:16.985 "compare": false, 00:08:16.985 "compare_and_write": false, 00:08:16.985 "abort": false, 00:08:16.985 "seek_hole": false, 00:08:16.985 "seek_data": false, 00:08:16.985 "copy": false, 00:08:16.985 "nvme_iov_md": false 00:08:16.985 }, 00:08:16.985 "memory_domains": [ 00:08:16.985 { 00:08:16.985 "dma_device_id": "system", 00:08:16.985 "dma_device_type": 1 00:08:16.985 }, 00:08:16.985 { 00:08:16.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.985 "dma_device_type": 2 00:08:16.985 }, 00:08:16.985 { 00:08:16.985 "dma_device_id": "system", 00:08:16.985 "dma_device_type": 1 00:08:16.985 }, 00:08:16.985 { 00:08:16.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.985 "dma_device_type": 2 00:08:16.985 }, 00:08:16.985 { 00:08:16.985 "dma_device_id": "system", 00:08:16.985 "dma_device_type": 1 00:08:16.985 }, 00:08:16.985 { 00:08:16.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.985 "dma_device_type": 2 00:08:16.985 } 00:08:16.985 ], 00:08:16.985 "driver_specific": { 00:08:16.985 "raid": { 00:08:16.985 "uuid": "cc3d18d5-e945-47eb-b133-6bdb3d69fa6f", 00:08:16.985 "strip_size_kb": 64, 00:08:16.985 "state": "online", 00:08:16.985 "raid_level": "raid0", 00:08:16.985 "superblock": true, 00:08:16.985 "num_base_bdevs": 3, 00:08:16.985 "num_base_bdevs_discovered": 3, 
00:08:16.985 "num_base_bdevs_operational": 3, 00:08:16.985 "base_bdevs_list": [ 00:08:16.985 { 00:08:16.985 "name": "NewBaseBdev", 00:08:16.985 "uuid": "6bfe34a5-428d-45be-b7e0-620ef87f3032", 00:08:16.985 "is_configured": true, 00:08:16.985 "data_offset": 2048, 00:08:16.985 "data_size": 63488 00:08:16.985 }, 00:08:16.985 { 00:08:16.985 "name": "BaseBdev2", 00:08:16.985 "uuid": "1d2a4269-48a4-4e48-a1dd-c8750ed12117", 00:08:16.985 "is_configured": true, 00:08:16.985 "data_offset": 2048, 00:08:16.985 "data_size": 63488 00:08:16.985 }, 00:08:16.985 { 00:08:16.985 "name": "BaseBdev3", 00:08:16.985 "uuid": "abeaacdb-f737-4383-83d3-2b2ac136bd33", 00:08:16.985 "is_configured": true, 00:08:16.985 "data_offset": 2048, 00:08:16.985 "data_size": 63488 00:08:16.985 } 00:08:16.985 ] 00:08:16.985 } 00:08:16.985 } 00:08:16.985 }' 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:16.985 BaseBdev2 00:08:16.985 BaseBdev3' 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.985 15:24:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:16.985 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.986 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.986 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.246 [2024-11-26 15:24:15.505961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.246 [2024-11-26 15:24:15.505988] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.246 [2024-11-26 15:24:15.506049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.246 [2024-11-26 15:24:15.506101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.246 [2024-11-26 15:24:15.506117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77237 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77237 ']' 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77237 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77237 00:08:17.246 killing process with pid 77237 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77237' 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77237 00:08:17.246 [2024-11-26 15:24:15.545268] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.246 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77237 00:08:17.246 [2024-11-26 15:24:15.575938] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.506 15:24:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:17.506 00:08:17.506 real 0m8.403s 00:08:17.506 user 0m14.404s 00:08:17.506 sys 0m1.582s 00:08:17.506 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.506 ************************************ 00:08:17.506 END TEST raid_state_function_test_sb 00:08:17.506 ************************************ 00:08:17.506 15:24:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.506 15:24:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:17.506 15:24:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:17.506 15:24:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.506 15:24:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.506 
************************************ 00:08:17.506 START TEST raid_superblock_test 00:08:17.506 ************************************ 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@412 -- # raid_pid=77841 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77841 00:08:17.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 77841 ']' 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.506 15:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.506 [2024-11-26 15:24:15.937140] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:08:17.506 [2024-11-26 15:24:15.937366] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77841 ] 00:08:17.766 [2024-11-26 15:24:16.071161] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:17.766 [2024-11-26 15:24:16.109936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.766 [2024-11-26 15:24:16.135610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.766 [2024-11-26 15:24:16.178516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.766 [2024-11-26 15:24:16.178628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.336 malloc1 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.336 [2024-11-26 15:24:16.778467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.336 [2024-11-26 15:24:16.778535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.336 [2024-11-26 15:24:16.778563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:18.336 [2024-11-26 15:24:16.778574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.336 [2024-11-26 15:24:16.780745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.336 [2024-11-26 15:24:16.780782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:18.336 pt1 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.336 malloc2 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.336 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.596 [2024-11-26 15:24:16.811347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:18.596 [2024-11-26 15:24:16.811450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.596 [2024-11-26 15:24:16.811500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:18.596 [2024-11-26 15:24:16.811535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.596 [2024-11-26 15:24:16.813737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.596 [2024-11-26 15:24:16.813831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:18.596 pt2 00:08:18.596 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.596 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.596 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:18.596 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:18.596 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.597 malloc3 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.597 [2024-11-26 15:24:16.840073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:18.597 [2024-11-26 15:24:16.840186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.597 [2024-11-26 15:24:16.840225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:18.597 [2024-11-26 15:24:16.840264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:18.597 [2024-11-26 15:24:16.842356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.597 [2024-11-26 15:24:16.842426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:18.597 pt3 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.597 [2024-11-26 15:24:16.852112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:18.597 [2024-11-26 15:24:16.853980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:18.597 [2024-11-26 15:24:16.854084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:18.597 [2024-11-26 15:24:16.854230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:18.597 [2024-11-26 15:24:16.854244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:18.597 [2024-11-26 15:24:16.854477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:18.597 [2024-11-26 15:24:16.854615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:18.597 [2024-11-26 15:24:16.854625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:18.597 [2024-11-26 
15:24:16.854743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.597 "name": "raid_bdev1", 00:08:18.597 "uuid": 
"b54f57ec-5e3e-4c2d-8245-be70bda2fc69", 00:08:18.597 "strip_size_kb": 64, 00:08:18.597 "state": "online", 00:08:18.597 "raid_level": "raid0", 00:08:18.597 "superblock": true, 00:08:18.597 "num_base_bdevs": 3, 00:08:18.597 "num_base_bdevs_discovered": 3, 00:08:18.597 "num_base_bdevs_operational": 3, 00:08:18.597 "base_bdevs_list": [ 00:08:18.597 { 00:08:18.597 "name": "pt1", 00:08:18.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.597 "is_configured": true, 00:08:18.597 "data_offset": 2048, 00:08:18.597 "data_size": 63488 00:08:18.597 }, 00:08:18.597 { 00:08:18.597 "name": "pt2", 00:08:18.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.597 "is_configured": true, 00:08:18.597 "data_offset": 2048, 00:08:18.597 "data_size": 63488 00:08:18.597 }, 00:08:18.597 { 00:08:18.597 "name": "pt3", 00:08:18.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:18.597 "is_configured": true, 00:08:18.597 "data_offset": 2048, 00:08:18.597 "data_size": 63488 00:08:18.597 } 00:08:18.597 ] 00:08:18.597 }' 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.597 15:24:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.857 [2024-11-26 15:24:17.272525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.857 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.857 "name": "raid_bdev1", 00:08:18.857 "aliases": [ 00:08:18.857 "b54f57ec-5e3e-4c2d-8245-be70bda2fc69" 00:08:18.857 ], 00:08:18.857 "product_name": "Raid Volume", 00:08:18.857 "block_size": 512, 00:08:18.857 "num_blocks": 190464, 00:08:18.857 "uuid": "b54f57ec-5e3e-4c2d-8245-be70bda2fc69", 00:08:18.857 "assigned_rate_limits": { 00:08:18.857 "rw_ios_per_sec": 0, 00:08:18.857 "rw_mbytes_per_sec": 0, 00:08:18.857 "r_mbytes_per_sec": 0, 00:08:18.857 "w_mbytes_per_sec": 0 00:08:18.857 }, 00:08:18.857 "claimed": false, 00:08:18.857 "zoned": false, 00:08:18.857 "supported_io_types": { 00:08:18.857 "read": true, 00:08:18.857 "write": true, 00:08:18.857 "unmap": true, 00:08:18.857 "flush": true, 00:08:18.857 "reset": true, 00:08:18.857 "nvme_admin": false, 00:08:18.857 "nvme_io": false, 00:08:18.857 "nvme_io_md": false, 00:08:18.857 "write_zeroes": true, 00:08:18.857 "zcopy": false, 00:08:18.857 "get_zone_info": false, 00:08:18.857 "zone_management": false, 00:08:18.857 "zone_append": false, 00:08:18.857 "compare": false, 00:08:18.857 "compare_and_write": false, 00:08:18.857 "abort": false, 00:08:18.857 "seek_hole": false, 00:08:18.857 "seek_data": false, 00:08:18.857 "copy": false, 00:08:18.857 "nvme_iov_md": false 00:08:18.857 }, 00:08:18.857 "memory_domains": [ 00:08:18.857 { 00:08:18.857 "dma_device_id": "system", 00:08:18.857 
"dma_device_type": 1 00:08:18.857 }, 00:08:18.857 { 00:08:18.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.857 "dma_device_type": 2 00:08:18.858 }, 00:08:18.858 { 00:08:18.858 "dma_device_id": "system", 00:08:18.858 "dma_device_type": 1 00:08:18.858 }, 00:08:18.858 { 00:08:18.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.858 "dma_device_type": 2 00:08:18.858 }, 00:08:18.858 { 00:08:18.858 "dma_device_id": "system", 00:08:18.858 "dma_device_type": 1 00:08:18.858 }, 00:08:18.858 { 00:08:18.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.858 "dma_device_type": 2 00:08:18.858 } 00:08:18.858 ], 00:08:18.858 "driver_specific": { 00:08:18.858 "raid": { 00:08:18.858 "uuid": "b54f57ec-5e3e-4c2d-8245-be70bda2fc69", 00:08:18.858 "strip_size_kb": 64, 00:08:18.858 "state": "online", 00:08:18.858 "raid_level": "raid0", 00:08:18.858 "superblock": true, 00:08:18.858 "num_base_bdevs": 3, 00:08:18.858 "num_base_bdevs_discovered": 3, 00:08:18.858 "num_base_bdevs_operational": 3, 00:08:18.858 "base_bdevs_list": [ 00:08:18.858 { 00:08:18.858 "name": "pt1", 00:08:18.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.858 "is_configured": true, 00:08:18.858 "data_offset": 2048, 00:08:18.858 "data_size": 63488 00:08:18.858 }, 00:08:18.858 { 00:08:18.858 "name": "pt2", 00:08:18.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.858 "is_configured": true, 00:08:18.858 "data_offset": 2048, 00:08:18.858 "data_size": 63488 00:08:18.858 }, 00:08:18.858 { 00:08:18.858 "name": "pt3", 00:08:18.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:18.858 "is_configured": true, 00:08:18.858 "data_offset": 2048, 00:08:18.858 "data_size": 63488 00:08:18.858 } 00:08:18.858 ] 00:08:18.858 } 00:08:18.858 } 00:08:18.858 }' 00:08:18.858 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:19.118 pt2 00:08:19.118 pt3' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:19.118 [2024-11-26 15:24:17.544524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b54f57ec-5e3e-4c2d-8245-be70bda2fc69 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b54f57ec-5e3e-4c2d-8245-be70bda2fc69 ']' 00:08:19.118 15:24:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.118 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.379 [2024-11-26 15:24:17.592245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.379 [2024-11-26 15:24:17.592274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.379 [2024-11-26 15:24:17.592343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.379 [2024-11-26 15:24:17.592407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.379 [2024-11-26 15:24:17.592420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.379 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.380 15:24:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.380 [2024-11-26 15:24:17.740324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:19.380 [2024-11-26 15:24:17.742158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:19.380 [2024-11-26 15:24:17.742224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:19.380 [2024-11-26 15:24:17.742267] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:19.380 [2024-11-26 15:24:17.742309] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:19.380 [2024-11-26 15:24:17.742343] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:19.380 [2024-11-26 15:24:17.742357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.380 [2024-11-26 15:24:17.742366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:19.380 request: 00:08:19.380 { 00:08:19.380 "name": "raid_bdev1", 00:08:19.380 "raid_level": "raid0", 00:08:19.380 "base_bdevs": [ 00:08:19.380 "malloc1", 00:08:19.380 "malloc2", 00:08:19.380 "malloc3" 00:08:19.380 ], 00:08:19.380 "strip_size_kb": 64, 00:08:19.380 "superblock": false, 00:08:19.380 "method": "bdev_raid_create", 00:08:19.380 "req_id": 1 00:08:19.380 } 00:08:19.380 Got JSON-RPC error response 00:08:19.380 response: 00:08:19.380 { 00:08:19.380 "code": -17, 00:08:19.380 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:19.380 } 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.380 15:24:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.380 [2024-11-26 15:24:17.808302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.380 [2024-11-26 15:24:17.808386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.380 [2024-11-26 15:24:17.808419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:19.380 [2024-11-26 15:24:17.808446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.380 [2024-11-26 15:24:17.810563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.380 [2024-11-26 15:24:17.810631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.380 [2024-11-26 15:24:17.810711] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:19.380 [2024-11-26 15:24:17.810782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:19.380 pt1 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:19.380 15:24:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.380 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.381 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.381 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.381 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.645 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.645 "name": "raid_bdev1", 00:08:19.645 "uuid": "b54f57ec-5e3e-4c2d-8245-be70bda2fc69", 00:08:19.645 "strip_size_kb": 64, 00:08:19.645 "state": "configuring", 00:08:19.645 "raid_level": "raid0", 00:08:19.645 "superblock": true, 00:08:19.645 "num_base_bdevs": 3, 00:08:19.645 "num_base_bdevs_discovered": 1, 00:08:19.645 "num_base_bdevs_operational": 3, 00:08:19.645 "base_bdevs_list": [ 
00:08:19.645 { 00:08:19.645 "name": "pt1", 00:08:19.645 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.645 "is_configured": true, 00:08:19.645 "data_offset": 2048, 00:08:19.645 "data_size": 63488 00:08:19.645 }, 00:08:19.645 { 00:08:19.645 "name": null, 00:08:19.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.645 "is_configured": false, 00:08:19.645 "data_offset": 2048, 00:08:19.645 "data_size": 63488 00:08:19.645 }, 00:08:19.645 { 00:08:19.645 "name": null, 00:08:19.645 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.645 "is_configured": false, 00:08:19.645 "data_offset": 2048, 00:08:19.645 "data_size": 63488 00:08:19.645 } 00:08:19.645 ] 00:08:19.645 }' 00:08:19.645 15:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.645 15:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.905 [2024-11-26 15:24:18.220429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.905 [2024-11-26 15:24:18.220491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.905 [2024-11-26 15:24:18.220514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:19.905 [2024-11-26 15:24:18.220523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.905 [2024-11-26 15:24:18.220910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.905 [2024-11-26 
15:24:18.220926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.905 [2024-11-26 15:24:18.220991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.905 [2024-11-26 15:24:18.221011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.905 pt2 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.905 [2024-11-26 15:24:18.232473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.905 "name": "raid_bdev1", 00:08:19.905 "uuid": "b54f57ec-5e3e-4c2d-8245-be70bda2fc69", 00:08:19.905 "strip_size_kb": 64, 00:08:19.905 "state": "configuring", 00:08:19.905 "raid_level": "raid0", 00:08:19.905 "superblock": true, 00:08:19.905 "num_base_bdevs": 3, 00:08:19.905 "num_base_bdevs_discovered": 1, 00:08:19.905 "num_base_bdevs_operational": 3, 00:08:19.905 "base_bdevs_list": [ 00:08:19.905 { 00:08:19.905 "name": "pt1", 00:08:19.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.905 "is_configured": true, 00:08:19.905 "data_offset": 2048, 00:08:19.905 "data_size": 63488 00:08:19.905 }, 00:08:19.905 { 00:08:19.905 "name": null, 00:08:19.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.905 "is_configured": false, 00:08:19.905 "data_offset": 0, 00:08:19.905 "data_size": 63488 00:08:19.905 }, 00:08:19.905 { 00:08:19.905 "name": null, 00:08:19.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.905 "is_configured": false, 00:08:19.905 "data_offset": 2048, 00:08:19.905 "data_size": 63488 00:08:19.905 } 00:08:19.905 ] 00:08:19.905 }' 00:08:19.905 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.905 15:24:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.476 [2024-11-26 15:24:18.676556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.476 [2024-11-26 15:24:18.676664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.476 [2024-11-26 15:24:18.676696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:20.476 [2024-11-26 15:24:18.676726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.476 [2024-11-26 15:24:18.677157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.476 [2024-11-26 15:24:18.677235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.476 [2024-11-26 15:24:18.677332] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:20.476 [2024-11-26 15:24:18.677382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.476 pt2 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.476 [2024-11-26 15:24:18.688530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:20.476 [2024-11-26 15:24:18.688628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.476 [2024-11-26 15:24:18.688657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:20.476 [2024-11-26 15:24:18.688683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.476 [2024-11-26 15:24:18.689022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.476 [2024-11-26 15:24:18.689082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:20.476 [2024-11-26 15:24:18.689160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:20.476 [2024-11-26 15:24:18.689223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:20.476 [2024-11-26 15:24:18.689334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:20.476 [2024-11-26 15:24:18.689372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.476 [2024-11-26 15:24:18.689621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:20.476 [2024-11-26 15:24:18.689763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:20.476 [2024-11-26 15:24:18.689799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:20.476 [2024-11-26 15:24:18.689935] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.476 pt3 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.476 "name": "raid_bdev1", 00:08:20.476 "uuid": "b54f57ec-5e3e-4c2d-8245-be70bda2fc69", 00:08:20.476 "strip_size_kb": 64, 00:08:20.476 "state": "online", 00:08:20.476 "raid_level": "raid0", 00:08:20.476 "superblock": true, 00:08:20.476 "num_base_bdevs": 3, 00:08:20.476 "num_base_bdevs_discovered": 3, 00:08:20.476 "num_base_bdevs_operational": 3, 00:08:20.476 "base_bdevs_list": [ 00:08:20.476 { 00:08:20.476 "name": "pt1", 00:08:20.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.476 "is_configured": true, 00:08:20.476 "data_offset": 2048, 00:08:20.476 "data_size": 63488 00:08:20.476 }, 00:08:20.476 { 00:08:20.476 "name": "pt2", 00:08:20.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.476 "is_configured": true, 00:08:20.476 "data_offset": 2048, 00:08:20.476 "data_size": 63488 00:08:20.476 }, 00:08:20.476 { 00:08:20.476 "name": "pt3", 00:08:20.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.476 "is_configured": true, 00:08:20.476 "data_offset": 2048, 00:08:20.476 "data_size": 63488 00:08:20.476 } 00:08:20.476 ] 00:08:20.476 }' 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.476 15:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.736 15:24:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.736 [2024-11-26 15:24:19.100939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.736 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.736 "name": "raid_bdev1", 00:08:20.736 "aliases": [ 00:08:20.736 "b54f57ec-5e3e-4c2d-8245-be70bda2fc69" 00:08:20.736 ], 00:08:20.736 "product_name": "Raid Volume", 00:08:20.736 "block_size": 512, 00:08:20.736 "num_blocks": 190464, 00:08:20.737 "uuid": "b54f57ec-5e3e-4c2d-8245-be70bda2fc69", 00:08:20.737 "assigned_rate_limits": { 00:08:20.737 "rw_ios_per_sec": 0, 00:08:20.737 "rw_mbytes_per_sec": 0, 00:08:20.737 "r_mbytes_per_sec": 0, 00:08:20.737 "w_mbytes_per_sec": 0 00:08:20.737 }, 00:08:20.737 "claimed": false, 00:08:20.737 "zoned": false, 00:08:20.737 "supported_io_types": { 00:08:20.737 "read": true, 00:08:20.737 "write": true, 00:08:20.737 "unmap": true, 00:08:20.737 "flush": true, 00:08:20.737 "reset": true, 00:08:20.737 "nvme_admin": false, 00:08:20.737 "nvme_io": false, 00:08:20.737 "nvme_io_md": false, 00:08:20.737 "write_zeroes": true, 00:08:20.737 "zcopy": false, 00:08:20.737 "get_zone_info": false, 00:08:20.737 "zone_management": false, 00:08:20.737 "zone_append": false, 00:08:20.737 "compare": false, 00:08:20.737 "compare_and_write": false, 00:08:20.737 "abort": false, 00:08:20.737 "seek_hole": false, 00:08:20.737 
"seek_data": false, 00:08:20.737 "copy": false, 00:08:20.737 "nvme_iov_md": false 00:08:20.737 }, 00:08:20.737 "memory_domains": [ 00:08:20.737 { 00:08:20.737 "dma_device_id": "system", 00:08:20.737 "dma_device_type": 1 00:08:20.737 }, 00:08:20.737 { 00:08:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.737 "dma_device_type": 2 00:08:20.737 }, 00:08:20.737 { 00:08:20.737 "dma_device_id": "system", 00:08:20.737 "dma_device_type": 1 00:08:20.737 }, 00:08:20.737 { 00:08:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.737 "dma_device_type": 2 00:08:20.737 }, 00:08:20.737 { 00:08:20.737 "dma_device_id": "system", 00:08:20.737 "dma_device_type": 1 00:08:20.737 }, 00:08:20.737 { 00:08:20.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.737 "dma_device_type": 2 00:08:20.737 } 00:08:20.737 ], 00:08:20.737 "driver_specific": { 00:08:20.737 "raid": { 00:08:20.737 "uuid": "b54f57ec-5e3e-4c2d-8245-be70bda2fc69", 00:08:20.737 "strip_size_kb": 64, 00:08:20.737 "state": "online", 00:08:20.737 "raid_level": "raid0", 00:08:20.737 "superblock": true, 00:08:20.737 "num_base_bdevs": 3, 00:08:20.737 "num_base_bdevs_discovered": 3, 00:08:20.737 "num_base_bdevs_operational": 3, 00:08:20.737 "base_bdevs_list": [ 00:08:20.737 { 00:08:20.737 "name": "pt1", 00:08:20.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.737 "is_configured": true, 00:08:20.737 "data_offset": 2048, 00:08:20.737 "data_size": 63488 00:08:20.737 }, 00:08:20.737 { 00:08:20.737 "name": "pt2", 00:08:20.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.737 "is_configured": true, 00:08:20.737 "data_offset": 2048, 00:08:20.737 "data_size": 63488 00:08:20.737 }, 00:08:20.737 { 00:08:20.737 "name": "pt3", 00:08:20.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.737 "is_configured": true, 00:08:20.737 "data_offset": 2048, 00:08:20.737 "data_size": 63488 00:08:20.737 } 00:08:20.737 ] 00:08:20.737 } 00:08:20.737 } 00:08:20.737 }' 00:08:20.737 15:24:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.737 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:20.737 pt2 00:08:20.737 pt3' 00:08:20.737 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.737 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.737 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.737 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:20.737 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.737 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.737 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.996 [2024-11-26 15:24:19.357003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
b54f57ec-5e3e-4c2d-8245-be70bda2fc69 '!=' b54f57ec-5e3e-4c2d-8245-be70bda2fc69 ']' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77841 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 77841 ']' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 77841 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77841 00:08:20.996 killing process with pid 77841 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77841' 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 77841 00:08:20.996 [2024-11-26 15:24:19.435363] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.996 [2024-11-26 15:24:19.435446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.996 [2024-11-26 15:24:19.435501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.996 [2024-11-26 15:24:19.435513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:20.996 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 77841 00:08:20.996 [2024-11-26 15:24:19.468074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.256 15:24:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:21.256 00:08:21.256 real 0m3.831s 00:08:21.256 user 0m6.051s 00:08:21.256 sys 0m0.803s 00:08:21.256 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.256 15:24:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.256 ************************************ 00:08:21.256 END TEST raid_superblock_test 00:08:21.256 ************************************ 00:08:21.516 15:24:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:21.516 15:24:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.516 15:24:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.516 15:24:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.516 ************************************ 00:08:21.516 START TEST raid_read_error_test 00:08:21.516 ************************************ 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.516 15:24:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3oTrHnwe6K 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78072 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78072 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 78072 ']' 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.516 15:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.516 [2024-11-26 15:24:19.858381] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:08:21.516 [2024-11-26 15:24:19.858615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78072 ] 00:08:21.776 [2024-11-26 15:24:19.992931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:21.776 [2024-11-26 15:24:20.029786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.776 [2024-11-26 15:24:20.055327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.776 [2024-11-26 15:24:20.098015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.776 [2024-11-26 15:24:20.098144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 BaseBdev1_malloc 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 true 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 [2024-11-26 15:24:20.705455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.347 [2024-11-26 15:24:20.705550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.347 [2024-11-26 15:24:20.705594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.347 [2024-11-26 15:24:20.705608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.347 [2024-11-26 15:24:20.707662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.347 [2024-11-26 15:24:20.707699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.347 BaseBdev1 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 BaseBdev2_malloc 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 true 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 [2024-11-26 15:24:20.746016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.347 [2024-11-26 15:24:20.746066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.347 [2024-11-26 15:24:20.746097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:22.347 [2024-11-26 15:24:20.746107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.347 [2024-11-26 15:24:20.748121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.347 [2024-11-26 15:24:20.748157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.347 BaseBdev2 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 BaseBdev3_malloc 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:22.347 
15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 true 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 [2024-11-26 15:24:20.786544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:22.347 [2024-11-26 15:24:20.786591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.347 [2024-11-26 15:24:20.786606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:22.347 [2024-11-26 15:24:20.786631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.347 [2024-11-26 15:24:20.788657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.347 [2024-11-26 15:24:20.788753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:22.347 BaseBdev3 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.347 [2024-11-26 15:24:20.798606] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.347 [2024-11-26 15:24:20.800420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.347 [2024-11-26 15:24:20.800537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.347 [2024-11-26 15:24:20.800742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.347 [2024-11-26 15:24:20.800758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.347 [2024-11-26 15:24:20.801015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:08:22.347 [2024-11-26 15:24:20.801143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.347 [2024-11-26 15:24:20.801154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:22.347 [2024-11-26 15:24:20.801279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.347 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.348 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.608 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.608 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.608 "name": "raid_bdev1", 00:08:22.608 "uuid": "66f7eee7-0faa-4077-b99a-a80590d30733", 00:08:22.608 "strip_size_kb": 64, 00:08:22.608 "state": "online", 00:08:22.608 "raid_level": "raid0", 00:08:22.608 "superblock": true, 00:08:22.608 "num_base_bdevs": 3, 00:08:22.608 "num_base_bdevs_discovered": 3, 00:08:22.608 "num_base_bdevs_operational": 3, 00:08:22.608 "base_bdevs_list": [ 00:08:22.608 { 00:08:22.608 "name": "BaseBdev1", 00:08:22.608 "uuid": "ccb17bc3-8563-5503-8d71-aaaf78580ca6", 00:08:22.608 "is_configured": true, 00:08:22.608 "data_offset": 2048, 00:08:22.608 "data_size": 63488 00:08:22.608 }, 00:08:22.608 { 00:08:22.608 "name": "BaseBdev2", 00:08:22.608 "uuid": "26557350-66fc-501a-a80e-6ab076e890c7", 00:08:22.608 "is_configured": true, 00:08:22.608 "data_offset": 2048, 00:08:22.608 "data_size": 63488 00:08:22.608 }, 00:08:22.608 { 00:08:22.608 "name": "BaseBdev3", 00:08:22.608 "uuid": "fbb28f1d-f5b8-56c5-9428-b17ec8bfee7a", 00:08:22.608 "is_configured": true, 00:08:22.608 "data_offset": 
2048, 00:08:22.608 "data_size": 63488 00:08:22.608 } 00:08:22.608 ] 00:08:22.608 }' 00:08:22.608 15:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.608 15:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.868 15:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:22.868 15:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:22.868 [2024-11-26 15:24:21.287097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.808 "name": "raid_bdev1", 00:08:23.808 "uuid": "66f7eee7-0faa-4077-b99a-a80590d30733", 00:08:23.808 "strip_size_kb": 64, 00:08:23.808 "state": "online", 00:08:23.808 "raid_level": "raid0", 00:08:23.808 "superblock": true, 00:08:23.808 "num_base_bdevs": 3, 00:08:23.808 "num_base_bdevs_discovered": 3, 00:08:23.808 "num_base_bdevs_operational": 3, 00:08:23.808 "base_bdevs_list": [ 00:08:23.808 { 00:08:23.808 "name": "BaseBdev1", 00:08:23.808 "uuid": "ccb17bc3-8563-5503-8d71-aaaf78580ca6", 00:08:23.808 "is_configured": true, 00:08:23.808 "data_offset": 2048, 00:08:23.808 "data_size": 63488 00:08:23.808 }, 00:08:23.808 { 00:08:23.808 "name": "BaseBdev2", 00:08:23.808 "uuid": "26557350-66fc-501a-a80e-6ab076e890c7", 00:08:23.808 "is_configured": true, 00:08:23.808 "data_offset": 2048, 
00:08:23.808 "data_size": 63488 00:08:23.808 }, 00:08:23.808 { 00:08:23.808 "name": "BaseBdev3", 00:08:23.808 "uuid": "fbb28f1d-f5b8-56c5-9428-b17ec8bfee7a", 00:08:23.808 "is_configured": true, 00:08:23.808 "data_offset": 2048, 00:08:23.808 "data_size": 63488 00:08:23.808 } 00:08:23.808 ] 00:08:23.808 }' 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.808 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.375 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.375 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.375 [2024-11-26 15:24:22.681509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.375 [2024-11-26 15:24:22.681613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.375 [2024-11-26 15:24:22.684238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.375 [2024-11-26 15:24:22.684296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.375 [2024-11-26 15:24:22.684332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.375 [2024-11-26 15:24:22.684342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:24.375 { 00:08:24.375 "results": [ 00:08:24.375 { 00:08:24.375 "job": "raid_bdev1", 00:08:24.375 "core_mask": "0x1", 00:08:24.375 "workload": "randrw", 00:08:24.375 "percentage": 50, 00:08:24.375 "status": "finished", 00:08:24.375 "queue_depth": 1, 00:08:24.375 "io_size": 131072, 00:08:24.375 "runtime": 1.392549, 00:08:24.375 "iops": 17394.72004216728, 00:08:24.375 "mibps": 
2174.34000527091, 00:08:24.376 "io_failed": 1, 00:08:24.376 "io_timeout": 0, 00:08:24.376 "avg_latency_us": 79.65889473890857, 00:08:24.376 "min_latency_us": 24.656149219907608, 00:08:24.376 "max_latency_us": 1356.646038525233 00:08:24.376 } 00:08:24.376 ], 00:08:24.376 "core_count": 1 00:08:24.376 } 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78072 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 78072 ']' 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 78072 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78072 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.376 killing process with pid 78072 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78072' 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 78072 00:08:24.376 [2024-11-26 15:24:22.733428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.376 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 78072 00:08:24.376 [2024-11-26 15:24:22.758534] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.636 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:24.636 15:24:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3oTrHnwe6K 00:08:24.636 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:24.636 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:24.636 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:24.636 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.636 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.636 15:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:24.636 00:08:24.636 real 0m3.220s 00:08:24.636 user 0m4.072s 00:08:24.636 sys 0m0.512s 00:08:24.636 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.636 15:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.636 ************************************ 00:08:24.636 END TEST raid_read_error_test 00:08:24.636 ************************************ 00:08:24.636 15:24:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:24.636 15:24:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:24.636 15:24:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.636 15:24:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.636 ************************************ 00:08:24.636 START TEST raid_write_error_test 00:08:24.636 ************************************ 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:24.636 
15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4LaE7LxMz5 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78207 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78207 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78207 ']' 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.636 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.896 [2024-11-26 15:24:23.147613] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:08:24.896 [2024-11-26 15:24:23.147822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78207 ] 00:08:24.896 [2024-11-26 15:24:23.282380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:24.896 [2024-11-26 15:24:23.318932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.896 [2024-11-26 15:24:23.344013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.156 [2024-11-26 15:24:23.386726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.156 [2024-11-26 15:24:23.386761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.723 BaseBdev1_malloc 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.723 15:24:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.723 true 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.723 15:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.723 [2024-11-26 15:24:23.998153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:25.723 [2024-11-26 15:24:23.998229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.723 [2024-11-26 15:24:23.998250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:25.723 [2024-11-26 15:24:23.998264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.723 [2024-11-26 15:24:24.000472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.723 [2024-11-26 15:24:24.000511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:25.723 BaseBdev1 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 BaseBdev2_malloc 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 true 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 [2024-11-26 15:24:24.038755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:25.724 [2024-11-26 15:24:24.038858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.724 [2024-11-26 15:24:24.038877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:25.724 [2024-11-26 15:24:24.038886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.724 [2024-11-26 15:24:24.040885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.724 [2024-11-26 15:24:24.040924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:25.724 BaseBdev2 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:25.724 15:24:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 BaseBdev3_malloc 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 true 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 [2024-11-26 15:24:24.079220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:25.724 [2024-11-26 15:24:24.079266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.724 [2024-11-26 15:24:24.079297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:25.724 [2024-11-26 15:24:24.079307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.724 [2024-11-26 15:24:24.081264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.724 [2024-11-26 15:24:24.081351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:25.724 BaseBdev3 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 [2024-11-26 15:24:24.091267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.724 [2024-11-26 15:24:24.093053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.724 [2024-11-26 15:24:24.093126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.724 [2024-11-26 15:24:24.093311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:25.724 [2024-11-26 15:24:24.093323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:25.724 [2024-11-26 15:24:24.093583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:08:25.724 [2024-11-26 15:24:24.093724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:25.724 [2024-11-26 15:24:24.093741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:25.724 [2024-11-26 15:24:24.093851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.724 "name": "raid_bdev1", 00:08:25.724 "uuid": "e9b312ac-992f-4cbf-b02c-69c38744e39d", 00:08:25.724 "strip_size_kb": 64, 00:08:25.724 "state": "online", 00:08:25.724 "raid_level": "raid0", 00:08:25.724 "superblock": true, 00:08:25.724 "num_base_bdevs": 3, 00:08:25.724 "num_base_bdevs_discovered": 3, 00:08:25.724 "num_base_bdevs_operational": 3, 00:08:25.724 "base_bdevs_list": [ 00:08:25.724 { 00:08:25.724 "name": "BaseBdev1", 00:08:25.724 "uuid": "5eb43f43-afd2-5c8b-bcc6-cfb89806f37b", 00:08:25.724 "is_configured": true, 00:08:25.724 "data_offset": 2048, 
00:08:25.724 "data_size": 63488 00:08:25.724 }, 00:08:25.724 { 00:08:25.724 "name": "BaseBdev2", 00:08:25.724 "uuid": "2cc27e38-fb06-5360-9f89-41bb302755c8", 00:08:25.724 "is_configured": true, 00:08:25.724 "data_offset": 2048, 00:08:25.724 "data_size": 63488 00:08:25.724 }, 00:08:25.724 { 00:08:25.724 "name": "BaseBdev3", 00:08:25.724 "uuid": "456b1916-15ee-5923-9984-38bfa092d7f8", 00:08:25.724 "is_configured": true, 00:08:25.724 "data_offset": 2048, 00:08:25.724 "data_size": 63488 00:08:25.724 } 00:08:25.724 ] 00:08:25.724 }' 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.724 15:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.292 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:26.292 15:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:26.292 [2024-11-26 15:24:24.655798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.245 "name": "raid_bdev1", 00:08:27.245 "uuid": "e9b312ac-992f-4cbf-b02c-69c38744e39d", 00:08:27.245 "strip_size_kb": 64, 00:08:27.245 "state": "online", 00:08:27.245 "raid_level": "raid0", 00:08:27.245 "superblock": true, 00:08:27.245 "num_base_bdevs": 3, 00:08:27.245 "num_base_bdevs_discovered": 3, 
00:08:27.245 "num_base_bdevs_operational": 3, 00:08:27.245 "base_bdevs_list": [ 00:08:27.245 { 00:08:27.245 "name": "BaseBdev1", 00:08:27.245 "uuid": "5eb43f43-afd2-5c8b-bcc6-cfb89806f37b", 00:08:27.245 "is_configured": true, 00:08:27.245 "data_offset": 2048, 00:08:27.245 "data_size": 63488 00:08:27.245 }, 00:08:27.245 { 00:08:27.245 "name": "BaseBdev2", 00:08:27.245 "uuid": "2cc27e38-fb06-5360-9f89-41bb302755c8", 00:08:27.245 "is_configured": true, 00:08:27.245 "data_offset": 2048, 00:08:27.245 "data_size": 63488 00:08:27.245 }, 00:08:27.245 { 00:08:27.245 "name": "BaseBdev3", 00:08:27.245 "uuid": "456b1916-15ee-5923-9984-38bfa092d7f8", 00:08:27.245 "is_configured": true, 00:08:27.245 "data_offset": 2048, 00:08:27.245 "data_size": 63488 00:08:27.245 } 00:08:27.245 ] 00:08:27.245 }' 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.245 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.813 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.813 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.813 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.813 [2024-11-26 15:24:25.989966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.813 [2024-11-26 15:24:25.990003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.813 [2024-11-26 15:24:25.992504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.813 [2024-11-26 15:24:25.992576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.813 [2024-11-26 15:24:25.992617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.813 [2024-11-26 15:24:25.992626] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:27.813 { 00:08:27.813 "results": [ 00:08:27.813 { 00:08:27.813 "job": "raid_bdev1", 00:08:27.813 "core_mask": "0x1", 00:08:27.813 "workload": "randrw", 00:08:27.813 "percentage": 50, 00:08:27.813 "status": "finished", 00:08:27.813 "queue_depth": 1, 00:08:27.813 "io_size": 131072, 00:08:27.813 "runtime": 1.332234, 00:08:27.813 "iops": 17358.81234077497, 00:08:27.813 "mibps": 2169.8515425968712, 00:08:27.813 "io_failed": 1, 00:08:27.813 "io_timeout": 0, 00:08:27.813 "avg_latency_us": 79.93987308269142, 00:08:27.813 "min_latency_us": 20.305064063453326, 00:08:27.813 "max_latency_us": 1356.646038525233 00:08:27.813 } 00:08:27.813 ], 00:08:27.813 "core_count": 1 00:08:27.813 } 00:08:27.813 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.813 15:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78207 00:08:27.813 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78207 ']' 00:08:27.813 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78207 00:08:27.813 15:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78207 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.813 killing process with pid 78207 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78207' 00:08:27.813 15:24:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78207 00:08:27.813 [2024-11-26 15:24:26.031172] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78207 00:08:27.813 [2024-11-26 15:24:26.056686] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4LaE7LxMz5 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:27.813 ************************************ 00:08:27.813 END TEST raid_write_error_test 00:08:27.813 ************************************ 00:08:27.813 00:08:27.813 real 0m3.229s 00:08:27.813 user 0m4.100s 00:08:27.813 sys 0m0.515s 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.813 15:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.072 15:24:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:28.072 15:24:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:08:28.072 15:24:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:28.072 15:24:26 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.072 15:24:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.072 ************************************ 00:08:28.072 START TEST raid_state_function_test 00:08:28.072 ************************************ 00:08:28.072 15:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:28.072 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:28.072 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:28.072 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:28.072 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:28.072 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:28.072 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.072 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:28.072 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:28.073 Process raid pid: 78334 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78334 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78334' 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78334 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78334 ']' 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.073 15:24:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.073 [2024-11-26 15:24:26.441562] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:08:28.073 [2024-11-26 15:24:26.441743] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.331 [2024-11-26 15:24:26.577897] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:28.331 [2024-11-26 15:24:26.616438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.331 [2024-11-26 15:24:26.641167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.331 [2024-11-26 15:24:26.683961] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.331 [2024-11-26 15:24:26.684075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.900 [2024-11-26 15:24:27.271040] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.900 [2024-11-26 15:24:27.271155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.900 [2024-11-26 15:24:27.271197] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.900 [2024-11-26 15:24:27.271219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.900 [2024-11-26 15:24:27.271243] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.900 [2024-11-26 15:24:27.271262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.900 15:24:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.900 "name": "Existed_Raid", 00:08:28.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.900 "strip_size_kb": 64, 00:08:28.900 "state": "configuring", 00:08:28.900 
"raid_level": "concat", 00:08:28.900 "superblock": false, 00:08:28.900 "num_base_bdevs": 3, 00:08:28.900 "num_base_bdevs_discovered": 0, 00:08:28.900 "num_base_bdevs_operational": 3, 00:08:28.900 "base_bdevs_list": [ 00:08:28.900 { 00:08:28.900 "name": "BaseBdev1", 00:08:28.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.900 "is_configured": false, 00:08:28.900 "data_offset": 0, 00:08:28.900 "data_size": 0 00:08:28.900 }, 00:08:28.900 { 00:08:28.900 "name": "BaseBdev2", 00:08:28.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.900 "is_configured": false, 00:08:28.900 "data_offset": 0, 00:08:28.900 "data_size": 0 00:08:28.900 }, 00:08:28.900 { 00:08:28.900 "name": "BaseBdev3", 00:08:28.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.900 "is_configured": false, 00:08:28.900 "data_offset": 0, 00:08:28.900 "data_size": 0 00:08:28.900 } 00:08:28.900 ] 00:08:28.900 }' 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.900 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.469 [2024-11-26 15:24:27.727059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.469 [2024-11-26 15:24:27.727141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 
BaseBdev3'\''' -n Existed_Raid 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.469 [2024-11-26 15:24:27.739086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.469 [2024-11-26 15:24:27.739155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.469 [2024-11-26 15:24:27.739169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.469 [2024-11-26 15:24:27.739205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.469 [2024-11-26 15:24:27.739213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.469 [2024-11-26 15:24:27.739223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.469 [2024-11-26 15:24:27.759867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.469 BaseBdev1 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:29.469 15:24:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.469 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.469 [ 00:08:29.469 { 00:08:29.469 "name": "BaseBdev1", 00:08:29.469 "aliases": [ 00:08:29.469 "1e865b0c-1e35-4eec-83a8-8bf1c1e50529" 00:08:29.469 ], 00:08:29.469 "product_name": "Malloc disk", 00:08:29.469 "block_size": 512, 00:08:29.469 "num_blocks": 65536, 00:08:29.469 "uuid": "1e865b0c-1e35-4eec-83a8-8bf1c1e50529", 00:08:29.469 "assigned_rate_limits": { 00:08:29.469 "rw_ios_per_sec": 0, 00:08:29.469 "rw_mbytes_per_sec": 0, 00:08:29.469 "r_mbytes_per_sec": 0, 00:08:29.469 "w_mbytes_per_sec": 0 00:08:29.469 }, 00:08:29.469 "claimed": true, 00:08:29.469 "claim_type": "exclusive_write", 00:08:29.469 "zoned": false, 00:08:29.469 "supported_io_types": { 00:08:29.469 "read": true, 00:08:29.469 "write": true, 00:08:29.469 "unmap": true, 00:08:29.469 "flush": true, 
00:08:29.469 "reset": true, 00:08:29.469 "nvme_admin": false, 00:08:29.469 "nvme_io": false, 00:08:29.469 "nvme_io_md": false, 00:08:29.469 "write_zeroes": true, 00:08:29.469 "zcopy": true, 00:08:29.469 "get_zone_info": false, 00:08:29.469 "zone_management": false, 00:08:29.469 "zone_append": false, 00:08:29.469 "compare": false, 00:08:29.469 "compare_and_write": false, 00:08:29.469 "abort": true, 00:08:29.469 "seek_hole": false, 00:08:29.469 "seek_data": false, 00:08:29.469 "copy": true, 00:08:29.469 "nvme_iov_md": false 00:08:29.469 }, 00:08:29.469 "memory_domains": [ 00:08:29.469 { 00:08:29.470 "dma_device_id": "system", 00:08:29.470 "dma_device_type": 1 00:08:29.470 }, 00:08:29.470 { 00:08:29.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.470 "dma_device_type": 2 00:08:29.470 } 00:08:29.470 ], 00:08:29.470 "driver_specific": {} 00:08:29.470 } 00:08:29.470 ] 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.470 "name": "Existed_Raid", 00:08:29.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.470 "strip_size_kb": 64, 00:08:29.470 "state": "configuring", 00:08:29.470 "raid_level": "concat", 00:08:29.470 "superblock": false, 00:08:29.470 "num_base_bdevs": 3, 00:08:29.470 "num_base_bdevs_discovered": 1, 00:08:29.470 "num_base_bdevs_operational": 3, 00:08:29.470 "base_bdevs_list": [ 00:08:29.470 { 00:08:29.470 "name": "BaseBdev1", 00:08:29.470 "uuid": "1e865b0c-1e35-4eec-83a8-8bf1c1e50529", 00:08:29.470 "is_configured": true, 00:08:29.470 "data_offset": 0, 00:08:29.470 "data_size": 65536 00:08:29.470 }, 00:08:29.470 { 00:08:29.470 "name": "BaseBdev2", 00:08:29.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.470 "is_configured": false, 00:08:29.470 "data_offset": 0, 00:08:29.470 "data_size": 0 00:08:29.470 }, 00:08:29.470 { 00:08:29.470 "name": "BaseBdev3", 00:08:29.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.470 "is_configured": false, 00:08:29.470 "data_offset": 0, 00:08:29.470 "data_size": 0 
00:08:29.470 } 00:08:29.470 ] 00:08:29.470 }' 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.470 15:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.038 [2024-11-26 15:24:28.248020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.038 [2024-11-26 15:24:28.248141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.038 [2024-11-26 15:24:28.260073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.038 [2024-11-26 15:24:28.261989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.038 [2024-11-26 15:24:28.262064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.038 [2024-11-26 15:24:28.262096] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.038 [2024-11-26 15:24:28.262117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 
doesn't exist now 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.038 15:24:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.038 "name": "Existed_Raid", 00:08:30.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.038 "strip_size_kb": 64, 00:08:30.038 "state": "configuring", 00:08:30.038 "raid_level": "concat", 00:08:30.038 "superblock": false, 00:08:30.038 "num_base_bdevs": 3, 00:08:30.038 "num_base_bdevs_discovered": 1, 00:08:30.038 "num_base_bdevs_operational": 3, 00:08:30.038 "base_bdevs_list": [ 00:08:30.038 { 00:08:30.038 "name": "BaseBdev1", 00:08:30.038 "uuid": "1e865b0c-1e35-4eec-83a8-8bf1c1e50529", 00:08:30.038 "is_configured": true, 00:08:30.038 "data_offset": 0, 00:08:30.038 "data_size": 65536 00:08:30.038 }, 00:08:30.038 { 00:08:30.038 "name": "BaseBdev2", 00:08:30.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.038 "is_configured": false, 00:08:30.038 "data_offset": 0, 00:08:30.038 "data_size": 0 00:08:30.038 }, 00:08:30.038 { 00:08:30.038 "name": "BaseBdev3", 00:08:30.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.038 "is_configured": false, 00:08:30.038 "data_offset": 0, 00:08:30.038 "data_size": 0 00:08:30.038 } 00:08:30.038 ] 00:08:30.038 }' 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.038 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.298 [2024-11-26 15:24:28.663280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.298 BaseBdev2 00:08:30.298 
15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.298 [ 00:08:30.298 { 00:08:30.298 "name": "BaseBdev2", 00:08:30.298 "aliases": [ 00:08:30.298 "8ff39fa8-f0c9-485e-aea7-1aa20eefd53e" 00:08:30.298 ], 00:08:30.298 "product_name": "Malloc disk", 00:08:30.298 "block_size": 512, 00:08:30.298 "num_blocks": 65536, 00:08:30.298 "uuid": "8ff39fa8-f0c9-485e-aea7-1aa20eefd53e", 00:08:30.298 "assigned_rate_limits": { 00:08:30.298 "rw_ios_per_sec": 0, 00:08:30.298 "rw_mbytes_per_sec": 0, 
00:08:30.298 "r_mbytes_per_sec": 0, 00:08:30.298 "w_mbytes_per_sec": 0 00:08:30.298 }, 00:08:30.298 "claimed": true, 00:08:30.298 "claim_type": "exclusive_write", 00:08:30.298 "zoned": false, 00:08:30.298 "supported_io_types": { 00:08:30.298 "read": true, 00:08:30.298 "write": true, 00:08:30.298 "unmap": true, 00:08:30.298 "flush": true, 00:08:30.298 "reset": true, 00:08:30.298 "nvme_admin": false, 00:08:30.298 "nvme_io": false, 00:08:30.298 "nvme_io_md": false, 00:08:30.298 "write_zeroes": true, 00:08:30.298 "zcopy": true, 00:08:30.298 "get_zone_info": false, 00:08:30.298 "zone_management": false, 00:08:30.298 "zone_append": false, 00:08:30.298 "compare": false, 00:08:30.298 "compare_and_write": false, 00:08:30.298 "abort": true, 00:08:30.298 "seek_hole": false, 00:08:30.298 "seek_data": false, 00:08:30.298 "copy": true, 00:08:30.298 "nvme_iov_md": false 00:08:30.298 }, 00:08:30.298 "memory_domains": [ 00:08:30.298 { 00:08:30.298 "dma_device_id": "system", 00:08:30.298 "dma_device_type": 1 00:08:30.298 }, 00:08:30.298 { 00:08:30.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.298 "dma_device_type": 2 00:08:30.298 } 00:08:30.298 ], 00:08:30.298 "driver_specific": {} 00:08:30.298 } 00:08:30.298 ] 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.298 "name": "Existed_Raid", 00:08:30.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.298 "strip_size_kb": 64, 00:08:30.298 "state": "configuring", 00:08:30.298 "raid_level": "concat", 00:08:30.298 "superblock": false, 00:08:30.298 "num_base_bdevs": 3, 00:08:30.298 "num_base_bdevs_discovered": 2, 00:08:30.298 "num_base_bdevs_operational": 3, 00:08:30.298 "base_bdevs_list": [ 00:08:30.298 { 00:08:30.298 "name": "BaseBdev1", 00:08:30.298 "uuid": "1e865b0c-1e35-4eec-83a8-8bf1c1e50529", 
00:08:30.298 "is_configured": true, 00:08:30.298 "data_offset": 0, 00:08:30.298 "data_size": 65536 00:08:30.298 }, 00:08:30.298 { 00:08:30.298 "name": "BaseBdev2", 00:08:30.298 "uuid": "8ff39fa8-f0c9-485e-aea7-1aa20eefd53e", 00:08:30.298 "is_configured": true, 00:08:30.298 "data_offset": 0, 00:08:30.298 "data_size": 65536 00:08:30.298 }, 00:08:30.298 { 00:08:30.298 "name": "BaseBdev3", 00:08:30.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.298 "is_configured": false, 00:08:30.298 "data_offset": 0, 00:08:30.298 "data_size": 0 00:08:30.298 } 00:08:30.298 ] 00:08:30.298 }' 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.298 15:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.870 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:30.870 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.870 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.870 [2024-11-26 15:24:29.058834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.870 [2024-11-26 15:24:29.059142] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:30.870 [2024-11-26 15:24:29.059311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:30.870 [2024-11-26 15:24:29.060467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:30.870 [2024-11-26 15:24:29.061062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:30.870 [2024-11-26 15:24:29.061282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:30.870 [2024-11-26 15:24:29.062107] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.870 BaseBdev3 00:08:30.870 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.870 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.871 [ 00:08:30.871 { 00:08:30.871 "name": "BaseBdev3", 00:08:30.871 "aliases": [ 00:08:30.871 "83ef8675-99e0-4c84-bef8-8b791af8f616" 00:08:30.871 ], 00:08:30.871 "product_name": "Malloc disk", 00:08:30.871 "block_size": 512, 00:08:30.871 "num_blocks": 65536, 00:08:30.871 "uuid": "83ef8675-99e0-4c84-bef8-8b791af8f616", 00:08:30.871 
"assigned_rate_limits": { 00:08:30.871 "rw_ios_per_sec": 0, 00:08:30.871 "rw_mbytes_per_sec": 0, 00:08:30.871 "r_mbytes_per_sec": 0, 00:08:30.871 "w_mbytes_per_sec": 0 00:08:30.871 }, 00:08:30.871 "claimed": true, 00:08:30.871 "claim_type": "exclusive_write", 00:08:30.871 "zoned": false, 00:08:30.871 "supported_io_types": { 00:08:30.871 "read": true, 00:08:30.871 "write": true, 00:08:30.871 "unmap": true, 00:08:30.871 "flush": true, 00:08:30.871 "reset": true, 00:08:30.871 "nvme_admin": false, 00:08:30.871 "nvme_io": false, 00:08:30.871 "nvme_io_md": false, 00:08:30.871 "write_zeroes": true, 00:08:30.871 "zcopy": true, 00:08:30.871 "get_zone_info": false, 00:08:30.871 "zone_management": false, 00:08:30.871 "zone_append": false, 00:08:30.871 "compare": false, 00:08:30.871 "compare_and_write": false, 00:08:30.871 "abort": true, 00:08:30.871 "seek_hole": false, 00:08:30.871 "seek_data": false, 00:08:30.871 "copy": true, 00:08:30.871 "nvme_iov_md": false 00:08:30.871 }, 00:08:30.871 "memory_domains": [ 00:08:30.871 { 00:08:30.871 "dma_device_id": "system", 00:08:30.871 "dma_device_type": 1 00:08:30.871 }, 00:08:30.871 { 00:08:30.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.871 "dma_device_type": 2 00:08:30.871 } 00:08:30.871 ], 00:08:30.871 "driver_specific": {} 00:08:30.871 } 00:08:30.871 ] 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.871 "name": "Existed_Raid", 00:08:30.871 "uuid": "d9ada207-e64d-4523-a21e-c5005d3762a9", 00:08:30.871 "strip_size_kb": 64, 00:08:30.871 "state": "online", 00:08:30.871 "raid_level": "concat", 00:08:30.871 "superblock": false, 00:08:30.871 "num_base_bdevs": 3, 00:08:30.871 "num_base_bdevs_discovered": 3, 00:08:30.871 "num_base_bdevs_operational": 3, 00:08:30.871 "base_bdevs_list": [ 00:08:30.871 { 
00:08:30.871 "name": "BaseBdev1", 00:08:30.871 "uuid": "1e865b0c-1e35-4eec-83a8-8bf1c1e50529", 00:08:30.871 "is_configured": true, 00:08:30.871 "data_offset": 0, 00:08:30.871 "data_size": 65536 00:08:30.871 }, 00:08:30.871 { 00:08:30.871 "name": "BaseBdev2", 00:08:30.871 "uuid": "8ff39fa8-f0c9-485e-aea7-1aa20eefd53e", 00:08:30.871 "is_configured": true, 00:08:30.871 "data_offset": 0, 00:08:30.871 "data_size": 65536 00:08:30.871 }, 00:08:30.871 { 00:08:30.871 "name": "BaseBdev3", 00:08:30.871 "uuid": "83ef8675-99e0-4c84-bef8-8b791af8f616", 00:08:30.871 "is_configured": true, 00:08:30.871 "data_offset": 0, 00:08:30.871 "data_size": 65536 00:08:30.871 } 00:08:30.871 ] 00:08:30.871 }' 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.871 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- 
# jq '.[]' 00:08:31.132 [2024-11-26 15:24:29.547230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.132 "name": "Existed_Raid", 00:08:31.132 "aliases": [ 00:08:31.132 "d9ada207-e64d-4523-a21e-c5005d3762a9" 00:08:31.132 ], 00:08:31.132 "product_name": "Raid Volume", 00:08:31.132 "block_size": 512, 00:08:31.132 "num_blocks": 196608, 00:08:31.132 "uuid": "d9ada207-e64d-4523-a21e-c5005d3762a9", 00:08:31.132 "assigned_rate_limits": { 00:08:31.132 "rw_ios_per_sec": 0, 00:08:31.132 "rw_mbytes_per_sec": 0, 00:08:31.132 "r_mbytes_per_sec": 0, 00:08:31.132 "w_mbytes_per_sec": 0 00:08:31.132 }, 00:08:31.132 "claimed": false, 00:08:31.132 "zoned": false, 00:08:31.132 "supported_io_types": { 00:08:31.132 "read": true, 00:08:31.132 "write": true, 00:08:31.132 "unmap": true, 00:08:31.132 "flush": true, 00:08:31.132 "reset": true, 00:08:31.132 "nvme_admin": false, 00:08:31.132 "nvme_io": false, 00:08:31.132 "nvme_io_md": false, 00:08:31.132 "write_zeroes": true, 00:08:31.132 "zcopy": false, 00:08:31.132 "get_zone_info": false, 00:08:31.132 "zone_management": false, 00:08:31.132 "zone_append": false, 00:08:31.132 "compare": false, 00:08:31.132 "compare_and_write": false, 00:08:31.132 "abort": false, 00:08:31.132 "seek_hole": false, 00:08:31.132 "seek_data": false, 00:08:31.132 "copy": false, 00:08:31.132 "nvme_iov_md": false 00:08:31.132 }, 00:08:31.132 "memory_domains": [ 00:08:31.132 { 00:08:31.132 "dma_device_id": "system", 00:08:31.132 "dma_device_type": 1 00:08:31.132 }, 00:08:31.132 { 00:08:31.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.132 "dma_device_type": 2 00:08:31.132 }, 00:08:31.132 { 00:08:31.132 "dma_device_id": "system", 00:08:31.132 "dma_device_type": 1 00:08:31.132 }, 00:08:31.132 { 00:08:31.132 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.132 "dma_device_type": 2 00:08:31.132 }, 00:08:31.132 { 00:08:31.132 "dma_device_id": "system", 00:08:31.132 "dma_device_type": 1 00:08:31.132 }, 00:08:31.132 { 00:08:31.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.132 "dma_device_type": 2 00:08:31.132 } 00:08:31.132 ], 00:08:31.132 "driver_specific": { 00:08:31.132 "raid": { 00:08:31.132 "uuid": "d9ada207-e64d-4523-a21e-c5005d3762a9", 00:08:31.132 "strip_size_kb": 64, 00:08:31.132 "state": "online", 00:08:31.132 "raid_level": "concat", 00:08:31.132 "superblock": false, 00:08:31.132 "num_base_bdevs": 3, 00:08:31.132 "num_base_bdevs_discovered": 3, 00:08:31.132 "num_base_bdevs_operational": 3, 00:08:31.132 "base_bdevs_list": [ 00:08:31.132 { 00:08:31.132 "name": "BaseBdev1", 00:08:31.132 "uuid": "1e865b0c-1e35-4eec-83a8-8bf1c1e50529", 00:08:31.132 "is_configured": true, 00:08:31.132 "data_offset": 0, 00:08:31.132 "data_size": 65536 00:08:31.132 }, 00:08:31.132 { 00:08:31.132 "name": "BaseBdev2", 00:08:31.132 "uuid": "8ff39fa8-f0c9-485e-aea7-1aa20eefd53e", 00:08:31.132 "is_configured": true, 00:08:31.132 "data_offset": 0, 00:08:31.132 "data_size": 65536 00:08:31.132 }, 00:08:31.132 { 00:08:31.132 "name": "BaseBdev3", 00:08:31.132 "uuid": "83ef8675-99e0-4c84-bef8-8b791af8f616", 00:08:31.132 "is_configured": true, 00:08:31.132 "data_offset": 0, 00:08:31.132 "data_size": 65536 00:08:31.132 } 00:08:31.132 ] 00:08:31.132 } 00:08:31.132 } 00:08:31.132 }' 00:08:31.132 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.391 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:31.391 BaseBdev2 00:08:31.391 BaseBdev3' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.392 15:24:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.392 15:24:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.392 [2024-11-26 15:24:29.787052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.392 [2024-11-26 15:24:29.787123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.392 [2024-11-26 15:24:29.787201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.392 15:24:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.392 "name": "Existed_Raid", 00:08:31.392 "uuid": "d9ada207-e64d-4523-a21e-c5005d3762a9", 00:08:31.392 "strip_size_kb": 64, 00:08:31.392 "state": "offline", 00:08:31.392 "raid_level": "concat", 00:08:31.392 "superblock": false, 00:08:31.392 "num_base_bdevs": 3, 00:08:31.392 "num_base_bdevs_discovered": 2, 00:08:31.392 "num_base_bdevs_operational": 2, 00:08:31.392 "base_bdevs_list": [ 00:08:31.392 { 00:08:31.392 "name": null, 00:08:31.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.392 "is_configured": false, 00:08:31.392 "data_offset": 0, 00:08:31.392 "data_size": 65536 00:08:31.392 }, 00:08:31.392 { 00:08:31.392 "name": "BaseBdev2", 00:08:31.392 "uuid": "8ff39fa8-f0c9-485e-aea7-1aa20eefd53e", 00:08:31.392 "is_configured": true, 00:08:31.392 "data_offset": 0, 00:08:31.392 "data_size": 65536 00:08:31.392 }, 00:08:31.392 { 00:08:31.392 "name": "BaseBdev3", 00:08:31.392 "uuid": "83ef8675-99e0-4c84-bef8-8b791af8f616", 00:08:31.392 "is_configured": true, 00:08:31.392 "data_offset": 0, 00:08:31.392 "data_size": 65536 00:08:31.392 } 00:08:31.392 ] 00:08:31.392 }' 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.392 15:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.958 [2024-11-26 15:24:30.302470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.958 [2024-11-26 15:24:30.373632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:31.958 [2024-11-26 15:24:30.373731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.958 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.959 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.218 BaseBdev2 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.218 [ 00:08:32.218 { 00:08:32.218 "name": "BaseBdev2", 00:08:32.218 "aliases": [ 00:08:32.218 
"9f4bf6cb-d2cf-420f-b456-6efc91047c7b" 00:08:32.218 ], 00:08:32.218 "product_name": "Malloc disk", 00:08:32.218 "block_size": 512, 00:08:32.218 "num_blocks": 65536, 00:08:32.218 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:32.218 "assigned_rate_limits": { 00:08:32.218 "rw_ios_per_sec": 0, 00:08:32.218 "rw_mbytes_per_sec": 0, 00:08:32.218 "r_mbytes_per_sec": 0, 00:08:32.218 "w_mbytes_per_sec": 0 00:08:32.218 }, 00:08:32.218 "claimed": false, 00:08:32.218 "zoned": false, 00:08:32.218 "supported_io_types": { 00:08:32.218 "read": true, 00:08:32.218 "write": true, 00:08:32.218 "unmap": true, 00:08:32.218 "flush": true, 00:08:32.218 "reset": true, 00:08:32.218 "nvme_admin": false, 00:08:32.218 "nvme_io": false, 00:08:32.218 "nvme_io_md": false, 00:08:32.218 "write_zeroes": true, 00:08:32.218 "zcopy": true, 00:08:32.218 "get_zone_info": false, 00:08:32.218 "zone_management": false, 00:08:32.218 "zone_append": false, 00:08:32.218 "compare": false, 00:08:32.218 "compare_and_write": false, 00:08:32.218 "abort": true, 00:08:32.218 "seek_hole": false, 00:08:32.218 "seek_data": false, 00:08:32.218 "copy": true, 00:08:32.218 "nvme_iov_md": false 00:08:32.218 }, 00:08:32.218 "memory_domains": [ 00:08:32.218 { 00:08:32.218 "dma_device_id": "system", 00:08:32.218 "dma_device_type": 1 00:08:32.218 }, 00:08:32.218 { 00:08:32.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.218 "dma_device_type": 2 00:08:32.218 } 00:08:32.218 ], 00:08:32.218 "driver_specific": {} 00:08:32.218 } 00:08:32.218 ] 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.218 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.219 BaseBdev3 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.219 [ 00:08:32.219 { 00:08:32.219 "name": "BaseBdev3", 00:08:32.219 "aliases": [ 00:08:32.219 
"ce6725b1-8ae1-4c9d-9d85-b103135a76fb" 00:08:32.219 ], 00:08:32.219 "product_name": "Malloc disk", 00:08:32.219 "block_size": 512, 00:08:32.219 "num_blocks": 65536, 00:08:32.219 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:32.219 "assigned_rate_limits": { 00:08:32.219 "rw_ios_per_sec": 0, 00:08:32.219 "rw_mbytes_per_sec": 0, 00:08:32.219 "r_mbytes_per_sec": 0, 00:08:32.219 "w_mbytes_per_sec": 0 00:08:32.219 }, 00:08:32.219 "claimed": false, 00:08:32.219 "zoned": false, 00:08:32.219 "supported_io_types": { 00:08:32.219 "read": true, 00:08:32.219 "write": true, 00:08:32.219 "unmap": true, 00:08:32.219 "flush": true, 00:08:32.219 "reset": true, 00:08:32.219 "nvme_admin": false, 00:08:32.219 "nvme_io": false, 00:08:32.219 "nvme_io_md": false, 00:08:32.219 "write_zeroes": true, 00:08:32.219 "zcopy": true, 00:08:32.219 "get_zone_info": false, 00:08:32.219 "zone_management": false, 00:08:32.219 "zone_append": false, 00:08:32.219 "compare": false, 00:08:32.219 "compare_and_write": false, 00:08:32.219 "abort": true, 00:08:32.219 "seek_hole": false, 00:08:32.219 "seek_data": false, 00:08:32.219 "copy": true, 00:08:32.219 "nvme_iov_md": false 00:08:32.219 }, 00:08:32.219 "memory_domains": [ 00:08:32.219 { 00:08:32.219 "dma_device_id": "system", 00:08:32.219 "dma_device_type": 1 00:08:32.219 }, 00:08:32.219 { 00:08:32.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.219 "dma_device_type": 2 00:08:32.219 } 00:08:32.219 ], 00:08:32.219 "driver_specific": {} 00:08:32.219 } 00:08:32.219 ] 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.219 [2024-11-26 15:24:30.541600] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.219 [2024-11-26 15:24:30.541689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.219 [2024-11-26 15:24:30.541731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.219 [2024-11-26 15:24:30.543630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.219 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.219 "name": "Existed_Raid", 00:08:32.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.219 "strip_size_kb": 64, 00:08:32.219 "state": "configuring", 00:08:32.219 "raid_level": "concat", 00:08:32.219 "superblock": false, 00:08:32.219 "num_base_bdevs": 3, 00:08:32.219 "num_base_bdevs_discovered": 2, 00:08:32.219 "num_base_bdevs_operational": 3, 00:08:32.219 "base_bdevs_list": [ 00:08:32.219 { 00:08:32.219 "name": "BaseBdev1", 00:08:32.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.219 "is_configured": false, 00:08:32.219 "data_offset": 0, 00:08:32.219 "data_size": 0 00:08:32.219 }, 00:08:32.219 { 00:08:32.219 "name": "BaseBdev2", 00:08:32.219 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:32.219 "is_configured": true, 00:08:32.219 "data_offset": 0, 00:08:32.219 "data_size": 65536 00:08:32.219 }, 00:08:32.219 { 00:08:32.219 "name": "BaseBdev3", 00:08:32.219 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:32.219 "is_configured": true, 00:08:32.219 "data_offset": 0, 00:08:32.220 "data_size": 65536 00:08:32.220 } 00:08:32.220 ] 00:08:32.220 }' 00:08:32.220 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:32.220 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.479 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:32.479 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.479 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.479 [2024-11-26 15:24:30.921711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:32.479 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.479 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.479 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.480 15:24:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.480 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.739 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.739 "name": "Existed_Raid", 00:08:32.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.739 "strip_size_kb": 64, 00:08:32.739 "state": "configuring", 00:08:32.739 "raid_level": "concat", 00:08:32.739 "superblock": false, 00:08:32.739 "num_base_bdevs": 3, 00:08:32.739 "num_base_bdevs_discovered": 1, 00:08:32.739 "num_base_bdevs_operational": 3, 00:08:32.739 "base_bdevs_list": [ 00:08:32.739 { 00:08:32.739 "name": "BaseBdev1", 00:08:32.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.739 "is_configured": false, 00:08:32.739 "data_offset": 0, 00:08:32.739 "data_size": 0 00:08:32.739 }, 00:08:32.739 { 00:08:32.739 "name": null, 00:08:32.739 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:32.739 "is_configured": false, 00:08:32.739 "data_offset": 0, 00:08:32.739 "data_size": 65536 00:08:32.739 }, 00:08:32.739 { 00:08:32.739 "name": "BaseBdev3", 00:08:32.739 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:32.739 "is_configured": true, 00:08:32.739 "data_offset": 0, 00:08:32.739 "data_size": 65536 00:08:32.739 } 00:08:32.739 ] 00:08:32.739 }' 00:08:32.739 15:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.739 15:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.998 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:32.998 15:24:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.998 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.998 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.998 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.998 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:32.998 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.998 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.998 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.998 [2024-11-26 15:24:31.380848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.998 BaseBdev1 00:08:32.998 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.999 [ 00:08:32.999 { 00:08:32.999 "name": "BaseBdev1", 00:08:32.999 "aliases": [ 00:08:32.999 "da9d9fb7-7bcc-40f3-b19a-939fa665261b" 00:08:32.999 ], 00:08:32.999 "product_name": "Malloc disk", 00:08:32.999 "block_size": 512, 00:08:32.999 "num_blocks": 65536, 00:08:32.999 "uuid": "da9d9fb7-7bcc-40f3-b19a-939fa665261b", 00:08:32.999 "assigned_rate_limits": { 00:08:32.999 "rw_ios_per_sec": 0, 00:08:32.999 "rw_mbytes_per_sec": 0, 00:08:32.999 "r_mbytes_per_sec": 0, 00:08:32.999 "w_mbytes_per_sec": 0 00:08:32.999 }, 00:08:32.999 "claimed": true, 00:08:32.999 "claim_type": "exclusive_write", 00:08:32.999 "zoned": false, 00:08:32.999 "supported_io_types": { 00:08:32.999 "read": true, 00:08:32.999 "write": true, 00:08:32.999 "unmap": true, 00:08:32.999 "flush": true, 00:08:32.999 "reset": true, 00:08:32.999 "nvme_admin": false, 00:08:32.999 "nvme_io": false, 00:08:32.999 "nvme_io_md": false, 00:08:32.999 "write_zeroes": true, 00:08:32.999 "zcopy": true, 00:08:32.999 "get_zone_info": false, 00:08:32.999 "zone_management": false, 00:08:32.999 "zone_append": false, 00:08:32.999 "compare": false, 00:08:32.999 "compare_and_write": false, 00:08:32.999 "abort": true, 00:08:32.999 "seek_hole": false, 00:08:32.999 "seek_data": false, 00:08:32.999 "copy": true, 00:08:32.999 "nvme_iov_md": false 00:08:32.999 }, 00:08:32.999 "memory_domains": [ 00:08:32.999 { 00:08:32.999 
"dma_device_id": "system", 00:08:32.999 "dma_device_type": 1 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.999 "dma_device_type": 2 00:08:32.999 } 00:08:32.999 ], 00:08:32.999 "driver_specific": {} 00:08:32.999 } 00:08:32.999 ] 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.999 
15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.999 "name": "Existed_Raid", 00:08:32.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.999 "strip_size_kb": 64, 00:08:32.999 "state": "configuring", 00:08:32.999 "raid_level": "concat", 00:08:32.999 "superblock": false, 00:08:32.999 "num_base_bdevs": 3, 00:08:32.999 "num_base_bdevs_discovered": 2, 00:08:32.999 "num_base_bdevs_operational": 3, 00:08:32.999 "base_bdevs_list": [ 00:08:32.999 { 00:08:32.999 "name": "BaseBdev1", 00:08:32.999 "uuid": "da9d9fb7-7bcc-40f3-b19a-939fa665261b", 00:08:32.999 "is_configured": true, 00:08:32.999 "data_offset": 0, 00:08:32.999 "data_size": 65536 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "name": null, 00:08:32.999 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:32.999 "is_configured": false, 00:08:32.999 "data_offset": 0, 00:08:32.999 "data_size": 65536 00:08:32.999 }, 00:08:32.999 { 00:08:32.999 "name": "BaseBdev3", 00:08:32.999 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:32.999 "is_configured": true, 00:08:32.999 "data_offset": 0, 00:08:32.999 "data_size": 65536 00:08:32.999 } 00:08:32.999 ] 00:08:32.999 }' 00:08:32.999 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.258 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
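The `waitforbdev` calls interleaved through the trace above follow a fixed shape: default the timeout to 2000 ms when none is given, let examine callbacks finish with `bdev_wait_for_examine`, then poll for the bdev by name. A minimal standalone reconstruction of that pattern, with `rpc_cmd` stubbed out (the real helper forwards the command to the SPDK application's RPC socket), might look like:

```shell
# Stub so the sketch runs without a live SPDK target; the real rpc_cmd
# sends each command over the application's JSON-RPC socket.
rpc_cmd() { echo "rpc: $*"; }

# Reconstruction of the waitforbdev pattern visible in the xtrace output.
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    # the trace shows an empty timeout falling back to 2000 (ms)
    [ -z "$bdev_timeout" ] && bdev_timeout=2000
    rpc_cmd bdev_wait_for_examine
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}

waitforbdev BaseBdev3
```

With the stub in place this only echoes the two RPC commands; in the suite the second call fails the test if the bdev never appears within the timeout.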
00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.517 [2024-11-26 15:24:31.881052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.517 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.518 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.518 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.518 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.518 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.518 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.518 "name": "Existed_Raid", 00:08:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.518 "strip_size_kb": 64, 00:08:33.518 "state": "configuring", 00:08:33.518 "raid_level": "concat", 00:08:33.518 "superblock": false, 00:08:33.518 "num_base_bdevs": 3, 00:08:33.518 "num_base_bdevs_discovered": 1, 00:08:33.518 "num_base_bdevs_operational": 3, 00:08:33.518 "base_bdevs_list": [ 00:08:33.518 { 00:08:33.518 "name": "BaseBdev1", 00:08:33.518 "uuid": "da9d9fb7-7bcc-40f3-b19a-939fa665261b", 00:08:33.518 "is_configured": true, 00:08:33.518 "data_offset": 0, 00:08:33.518 "data_size": 65536 00:08:33.518 }, 00:08:33.518 { 00:08:33.518 "name": null, 00:08:33.518 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:33.518 "is_configured": false, 00:08:33.518 "data_offset": 0, 00:08:33.518 "data_size": 65536 00:08:33.518 }, 00:08:33.518 { 00:08:33.518 "name": null, 00:08:33.518 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:33.518 "is_configured": false, 00:08:33.518 "data_offset": 0, 00:08:33.518 "data_size": 65536 00:08:33.518 } 00:08:33.518 ] 00:08:33.518 }' 00:08:33.518 15:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.518 15:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.087 [2024-11-26 15:24:32.413233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.087 "name": "Existed_Raid", 00:08:34.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.087 "strip_size_kb": 64, 00:08:34.087 "state": "configuring", 00:08:34.087 "raid_level": "concat", 00:08:34.087 "superblock": false, 00:08:34.087 "num_base_bdevs": 3, 00:08:34.087 "num_base_bdevs_discovered": 2, 00:08:34.087 "num_base_bdevs_operational": 3, 00:08:34.087 "base_bdevs_list": [ 00:08:34.087 { 00:08:34.087 "name": "BaseBdev1", 00:08:34.087 "uuid": "da9d9fb7-7bcc-40f3-b19a-939fa665261b", 00:08:34.087 "is_configured": true, 00:08:34.087 "data_offset": 0, 00:08:34.087 "data_size": 65536 00:08:34.087 }, 00:08:34.087 { 00:08:34.087 "name": null, 00:08:34.087 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:34.087 "is_configured": false, 00:08:34.087 "data_offset": 0, 00:08:34.087 "data_size": 65536 00:08:34.087 }, 00:08:34.087 { 
00:08:34.087 "name": "BaseBdev3", 00:08:34.087 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:34.087 "is_configured": true, 00:08:34.087 "data_offset": 0, 00:08:34.087 "data_size": 65536 00:08:34.087 } 00:08:34.087 ] 00:08:34.087 }' 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.087 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.655 [2024-11-26 15:24:32.877347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.655 "name": "Existed_Raid", 00:08:34.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.655 "strip_size_kb": 64, 00:08:34.655 "state": "configuring", 00:08:34.655 "raid_level": "concat", 00:08:34.655 "superblock": false, 00:08:34.655 "num_base_bdevs": 3, 00:08:34.655 "num_base_bdevs_discovered": 1, 00:08:34.655 "num_base_bdevs_operational": 3, 00:08:34.655 "base_bdevs_list": [ 00:08:34.655 { 00:08:34.655 "name": null, 00:08:34.655 "uuid": 
"da9d9fb7-7bcc-40f3-b19a-939fa665261b", 00:08:34.655 "is_configured": false, 00:08:34.655 "data_offset": 0, 00:08:34.655 "data_size": 65536 00:08:34.655 }, 00:08:34.655 { 00:08:34.655 "name": null, 00:08:34.655 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:34.655 "is_configured": false, 00:08:34.655 "data_offset": 0, 00:08:34.655 "data_size": 65536 00:08:34.655 }, 00:08:34.655 { 00:08:34.655 "name": "BaseBdev3", 00:08:34.655 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:34.655 "is_configured": true, 00:08:34.655 "data_offset": 0, 00:08:34.655 "data_size": 65536 00:08:34.655 } 00:08:34.655 ] 00:08:34.655 }' 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.655 15:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.914 [2024-11-26 15:24:33.383812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:08:34.914 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.173 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.173 "name": "Existed_Raid", 00:08:35.173 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.173 "strip_size_kb": 64, 00:08:35.173 "state": "configuring", 00:08:35.173 "raid_level": "concat", 00:08:35.174 "superblock": false, 00:08:35.174 "num_base_bdevs": 3, 00:08:35.174 "num_base_bdevs_discovered": 2, 00:08:35.174 "num_base_bdevs_operational": 3, 00:08:35.174 "base_bdevs_list": [ 00:08:35.174 { 00:08:35.174 "name": null, 00:08:35.174 "uuid": "da9d9fb7-7bcc-40f3-b19a-939fa665261b", 00:08:35.174 "is_configured": false, 00:08:35.174 "data_offset": 0, 00:08:35.174 "data_size": 65536 00:08:35.174 }, 00:08:35.174 { 00:08:35.174 "name": "BaseBdev2", 00:08:35.174 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:35.174 "is_configured": true, 00:08:35.174 "data_offset": 0, 00:08:35.174 "data_size": 65536 00:08:35.174 }, 00:08:35.174 { 00:08:35.174 "name": "BaseBdev3", 00:08:35.174 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:35.174 "is_configured": true, 00:08:35.174 "data_offset": 0, 00:08:35.174 "data_size": 65536 00:08:35.174 } 00:08:35.174 ] 00:08:35.174 }' 00:08:35.174 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.174 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:35.433 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u da9d9fb7-7bcc-40f3-b19a-939fa665261b 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.693 [2024-11-26 15:24:33.931005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:35.693 [2024-11-26 15:24:33.931049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.693 [2024-11-26 15:24:33.931056] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:35.693 [2024-11-26 15:24:33.931307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:35.693 [2024-11-26 15:24:33.931434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.693 [2024-11-26 15:24:33.931455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:35.693 [2024-11-26 15:24:33.931636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.693 NewBaseBdev 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.693 [ 00:08:35.693 { 00:08:35.693 "name": "NewBaseBdev", 00:08:35.693 "aliases": [ 00:08:35.693 "da9d9fb7-7bcc-40f3-b19a-939fa665261b" 00:08:35.693 ], 00:08:35.693 "product_name": "Malloc disk", 00:08:35.693 "block_size": 512, 00:08:35.693 "num_blocks": 65536, 00:08:35.693 "uuid": "da9d9fb7-7bcc-40f3-b19a-939fa665261b", 00:08:35.693 "assigned_rate_limits": { 00:08:35.693 "rw_ios_per_sec": 0, 00:08:35.693 "rw_mbytes_per_sec": 0, 00:08:35.693 "r_mbytes_per_sec": 0, 00:08:35.693 "w_mbytes_per_sec": 0 00:08:35.693 }, 00:08:35.693 "claimed": true, 00:08:35.693 "claim_type": 
"exclusive_write", 00:08:35.693 "zoned": false, 00:08:35.693 "supported_io_types": { 00:08:35.693 "read": true, 00:08:35.693 "write": true, 00:08:35.693 "unmap": true, 00:08:35.693 "flush": true, 00:08:35.693 "reset": true, 00:08:35.693 "nvme_admin": false, 00:08:35.693 "nvme_io": false, 00:08:35.693 "nvme_io_md": false, 00:08:35.693 "write_zeroes": true, 00:08:35.693 "zcopy": true, 00:08:35.693 "get_zone_info": false, 00:08:35.693 "zone_management": false, 00:08:35.693 "zone_append": false, 00:08:35.693 "compare": false, 00:08:35.693 "compare_and_write": false, 00:08:35.693 "abort": true, 00:08:35.693 "seek_hole": false, 00:08:35.693 "seek_data": false, 00:08:35.693 "copy": true, 00:08:35.693 "nvme_iov_md": false 00:08:35.693 }, 00:08:35.693 "memory_domains": [ 00:08:35.693 { 00:08:35.693 "dma_device_id": "system", 00:08:35.693 "dma_device_type": 1 00:08:35.693 }, 00:08:35.693 { 00:08:35.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.693 "dma_device_type": 2 00:08:35.693 } 00:08:35.693 ], 00:08:35.693 "driver_specific": {} 00:08:35.693 } 00:08:35.693 ] 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:35.693 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.694 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.694 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.694 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.694 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.694 15:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.694 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.694 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.694 15:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.694 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.694 "name": "Existed_Raid", 00:08:35.694 "uuid": "658fc70c-a6e0-4bcc-84f1-a49847917c22", 00:08:35.694 "strip_size_kb": 64, 00:08:35.694 "state": "online", 00:08:35.694 "raid_level": "concat", 00:08:35.694 "superblock": false, 00:08:35.694 "num_base_bdevs": 3, 00:08:35.694 "num_base_bdevs_discovered": 3, 00:08:35.694 "num_base_bdevs_operational": 3, 00:08:35.694 "base_bdevs_list": [ 00:08:35.694 { 00:08:35.694 "name": "NewBaseBdev", 00:08:35.694 "uuid": "da9d9fb7-7bcc-40f3-b19a-939fa665261b", 00:08:35.694 "is_configured": true, 00:08:35.694 "data_offset": 0, 00:08:35.694 "data_size": 65536 00:08:35.694 }, 00:08:35.694 { 00:08:35.694 "name": "BaseBdev2", 00:08:35.694 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:35.694 "is_configured": true, 00:08:35.694 "data_offset": 0, 00:08:35.694 "data_size": 65536 00:08:35.694 }, 00:08:35.694 { 
00:08:35.694 "name": "BaseBdev3", 00:08:35.694 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:35.694 "is_configured": true, 00:08:35.694 "data_offset": 0, 00:08:35.694 "data_size": 65536 00:08:35.694 } 00:08:35.694 ] 00:08:35.694 }' 00:08:35.694 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.694 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.953 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.953 [2024-11-26 15:24:34.415510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.213 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.213 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.213 "name": "Existed_Raid", 00:08:36.213 "aliases": [ 00:08:36.213 
"658fc70c-a6e0-4bcc-84f1-a49847917c22" 00:08:36.213 ], 00:08:36.213 "product_name": "Raid Volume", 00:08:36.213 "block_size": 512, 00:08:36.213 "num_blocks": 196608, 00:08:36.213 "uuid": "658fc70c-a6e0-4bcc-84f1-a49847917c22", 00:08:36.213 "assigned_rate_limits": { 00:08:36.213 "rw_ios_per_sec": 0, 00:08:36.213 "rw_mbytes_per_sec": 0, 00:08:36.213 "r_mbytes_per_sec": 0, 00:08:36.213 "w_mbytes_per_sec": 0 00:08:36.213 }, 00:08:36.213 "claimed": false, 00:08:36.213 "zoned": false, 00:08:36.213 "supported_io_types": { 00:08:36.213 "read": true, 00:08:36.213 "write": true, 00:08:36.213 "unmap": true, 00:08:36.213 "flush": true, 00:08:36.213 "reset": true, 00:08:36.214 "nvme_admin": false, 00:08:36.214 "nvme_io": false, 00:08:36.214 "nvme_io_md": false, 00:08:36.214 "write_zeroes": true, 00:08:36.214 "zcopy": false, 00:08:36.214 "get_zone_info": false, 00:08:36.214 "zone_management": false, 00:08:36.214 "zone_append": false, 00:08:36.214 "compare": false, 00:08:36.214 "compare_and_write": false, 00:08:36.214 "abort": false, 00:08:36.214 "seek_hole": false, 00:08:36.214 "seek_data": false, 00:08:36.214 "copy": false, 00:08:36.214 "nvme_iov_md": false 00:08:36.214 }, 00:08:36.214 "memory_domains": [ 00:08:36.214 { 00:08:36.214 "dma_device_id": "system", 00:08:36.214 "dma_device_type": 1 00:08:36.214 }, 00:08:36.214 { 00:08:36.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.214 "dma_device_type": 2 00:08:36.214 }, 00:08:36.214 { 00:08:36.214 "dma_device_id": "system", 00:08:36.214 "dma_device_type": 1 00:08:36.214 }, 00:08:36.214 { 00:08:36.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.214 "dma_device_type": 2 00:08:36.214 }, 00:08:36.214 { 00:08:36.214 "dma_device_id": "system", 00:08:36.214 "dma_device_type": 1 00:08:36.214 }, 00:08:36.214 { 00:08:36.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.214 "dma_device_type": 2 00:08:36.214 } 00:08:36.214 ], 00:08:36.214 "driver_specific": { 00:08:36.214 "raid": { 00:08:36.214 "uuid": 
"658fc70c-a6e0-4bcc-84f1-a49847917c22", 00:08:36.214 "strip_size_kb": 64, 00:08:36.214 "state": "online", 00:08:36.214 "raid_level": "concat", 00:08:36.214 "superblock": false, 00:08:36.214 "num_base_bdevs": 3, 00:08:36.214 "num_base_bdevs_discovered": 3, 00:08:36.214 "num_base_bdevs_operational": 3, 00:08:36.214 "base_bdevs_list": [ 00:08:36.214 { 00:08:36.214 "name": "NewBaseBdev", 00:08:36.214 "uuid": "da9d9fb7-7bcc-40f3-b19a-939fa665261b", 00:08:36.214 "is_configured": true, 00:08:36.214 "data_offset": 0, 00:08:36.214 "data_size": 65536 00:08:36.214 }, 00:08:36.214 { 00:08:36.214 "name": "BaseBdev2", 00:08:36.214 "uuid": "9f4bf6cb-d2cf-420f-b456-6efc91047c7b", 00:08:36.214 "is_configured": true, 00:08:36.214 "data_offset": 0, 00:08:36.214 "data_size": 65536 00:08:36.214 }, 00:08:36.214 { 00:08:36.214 "name": "BaseBdev3", 00:08:36.214 "uuid": "ce6725b1-8ae1-4c9d-9d85-b103135a76fb", 00:08:36.214 "is_configured": true, 00:08:36.214 "data_offset": 0, 00:08:36.214 "data_size": 65536 00:08:36.214 } 00:08:36.214 ] 00:08:36.214 } 00:08:36.214 } 00:08:36.214 }' 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:36.214 BaseBdev2 00:08:36.214 BaseBdev3' 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.214 15:24:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.214 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.474 [2024-11-26 15:24:34.699267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.474 [2024-11-26 15:24:34.699296] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.474 [2024-11-26 15:24:34.699368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.474 [2024-11-26 15:24:34.699426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.474 [2024-11-26 15:24:34.699436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78334 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78334 ']' 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 78334 00:08:36.474 
15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78334 00:08:36.474 killing process with pid 78334 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78334' 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78334 00:08:36.474 [2024-11-26 15:24:34.747375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.474 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78334 00:08:36.474 [2024-11-26 15:24:34.777539] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.733 ************************************ 00:08:36.733 END TEST raid_state_function_test 00:08:36.733 ************************************ 00:08:36.733 15:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:36.733 00:08:36.733 real 0m8.648s 00:08:36.733 user 0m14.785s 00:08:36.733 sys 0m1.711s 00:08:36.733 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.733 15:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.734 15:24:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:36.734 15:24:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:36.734 15:24:35 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.734 15:24:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.734 ************************************ 00:08:36.734 START TEST raid_state_function_test_sb 00:08:36.734 ************************************ 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78938 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78938' 00:08:36.734 Process raid pid: 78938 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78938 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78938 ']' 00:08:36.734 
15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.734 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.734 [2024-11-26 15:24:35.152937] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:08:36.734 [2024-11-26 15:24:35.153141] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.993 [2024-11-26 15:24:35.290585] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:36.993 [2024-11-26 15:24:35.324396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.993 [2024-11-26 15:24:35.350722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.993 [2024-11-26 15:24:35.393566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.993 [2024-11-26 15:24:35.393602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.562 [2024-11-26 15:24:35.984479] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.562 [2024-11-26 15:24:35.984530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.562 [2024-11-26 15:24:35.984541] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.562 [2024-11-26 15:24:35.984548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.562 [2024-11-26 15:24:35.984560] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.562 [2024-11-26 15:24:35.984567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.562 15:24:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.562 15:24:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.562 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.562 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.562 "name": "Existed_Raid", 00:08:37.562 "uuid": "ba208c1a-af41-47cc-9d79-37a9c837cd90", 00:08:37.562 "strip_size_kb": 64, 
00:08:37.562 "state": "configuring", 00:08:37.562 "raid_level": "concat", 00:08:37.562 "superblock": true, 00:08:37.562 "num_base_bdevs": 3, 00:08:37.562 "num_base_bdevs_discovered": 0, 00:08:37.562 "num_base_bdevs_operational": 3, 00:08:37.562 "base_bdevs_list": [ 00:08:37.562 { 00:08:37.562 "name": "BaseBdev1", 00:08:37.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.562 "is_configured": false, 00:08:37.562 "data_offset": 0, 00:08:37.562 "data_size": 0 00:08:37.562 }, 00:08:37.562 { 00:08:37.562 "name": "BaseBdev2", 00:08:37.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.562 "is_configured": false, 00:08:37.562 "data_offset": 0, 00:08:37.562 "data_size": 0 00:08:37.563 }, 00:08:37.563 { 00:08:37.563 "name": "BaseBdev3", 00:08:37.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.563 "is_configured": false, 00:08:37.563 "data_offset": 0, 00:08:37.563 "data_size": 0 00:08:37.563 } 00:08:37.563 ] 00:08:37.563 }' 00:08:37.563 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.563 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.133 [2024-11-26 15:24:36.396483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.133 [2024-11-26 15:24:36.396523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.133 [2024-11-26 15:24:36.404530] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.133 [2024-11-26 15:24:36.404606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.133 [2024-11-26 15:24:36.404637] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.133 [2024-11-26 15:24:36.404658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.133 [2024-11-26 15:24:36.404678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.133 [2024-11-26 15:24:36.404700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.133 [2024-11-26 15:24:36.421486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.133 BaseBdev1 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.133 [ 00:08:38.133 { 00:08:38.133 "name": "BaseBdev1", 00:08:38.133 "aliases": [ 00:08:38.133 "cd7e78f7-010a-4dd0-8cd0-1b1a920647f7" 00:08:38.133 ], 00:08:38.133 "product_name": "Malloc disk", 00:08:38.133 "block_size": 512, 00:08:38.133 "num_blocks": 65536, 00:08:38.133 "uuid": "cd7e78f7-010a-4dd0-8cd0-1b1a920647f7", 00:08:38.133 "assigned_rate_limits": { 00:08:38.133 "rw_ios_per_sec": 0, 00:08:38.133 "rw_mbytes_per_sec": 0, 00:08:38.133 "r_mbytes_per_sec": 0, 00:08:38.133 "w_mbytes_per_sec": 0 00:08:38.133 }, 00:08:38.133 "claimed": true, 00:08:38.133 "claim_type": "exclusive_write", 00:08:38.133 "zoned": false, 00:08:38.133 "supported_io_types": { 
00:08:38.133 "read": true, 00:08:38.133 "write": true, 00:08:38.133 "unmap": true, 00:08:38.133 "flush": true, 00:08:38.133 "reset": true, 00:08:38.133 "nvme_admin": false, 00:08:38.133 "nvme_io": false, 00:08:38.133 "nvme_io_md": false, 00:08:38.133 "write_zeroes": true, 00:08:38.133 "zcopy": true, 00:08:38.133 "get_zone_info": false, 00:08:38.133 "zone_management": false, 00:08:38.133 "zone_append": false, 00:08:38.133 "compare": false, 00:08:38.133 "compare_and_write": false, 00:08:38.133 "abort": true, 00:08:38.133 "seek_hole": false, 00:08:38.133 "seek_data": false, 00:08:38.133 "copy": true, 00:08:38.133 "nvme_iov_md": false 00:08:38.133 }, 00:08:38.133 "memory_domains": [ 00:08:38.133 { 00:08:38.133 "dma_device_id": "system", 00:08:38.133 "dma_device_type": 1 00:08:38.133 }, 00:08:38.133 { 00:08:38.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.133 "dma_device_type": 2 00:08:38.133 } 00:08:38.133 ], 00:08:38.133 "driver_specific": {} 00:08:38.133 } 00:08:38.133 ] 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.133 15:24:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.133 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.133 "name": "Existed_Raid", 00:08:38.133 "uuid": "11f78692-83d1-4307-995b-d67528acbcec", 00:08:38.133 "strip_size_kb": 64, 00:08:38.133 "state": "configuring", 00:08:38.133 "raid_level": "concat", 00:08:38.133 "superblock": true, 00:08:38.133 "num_base_bdevs": 3, 00:08:38.133 "num_base_bdevs_discovered": 1, 00:08:38.133 "num_base_bdevs_operational": 3, 00:08:38.133 "base_bdevs_list": [ 00:08:38.133 { 00:08:38.133 "name": "BaseBdev1", 00:08:38.133 "uuid": "cd7e78f7-010a-4dd0-8cd0-1b1a920647f7", 00:08:38.133 "is_configured": true, 00:08:38.133 "data_offset": 2048, 00:08:38.133 "data_size": 63488 00:08:38.133 }, 00:08:38.133 { 00:08:38.133 "name": "BaseBdev2", 00:08:38.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.133 "is_configured": false, 00:08:38.133 "data_offset": 0, 00:08:38.133 "data_size": 0 00:08:38.133 }, 00:08:38.133 { 00:08:38.133 "name": 
"BaseBdev3", 00:08:38.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.133 "is_configured": false, 00:08:38.134 "data_offset": 0, 00:08:38.134 "data_size": 0 00:08:38.134 } 00:08:38.134 ] 00:08:38.134 }' 00:08:38.134 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.134 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.703 [2024-11-26 15:24:36.881665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.703 [2024-11-26 15:24:36.881720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.703 [2024-11-26 15:24:36.893726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.703 [2024-11-26 15:24:36.895629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.703 [2024-11-26 15:24:36.895708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.703 [2024-11-26 15:24:36.895728] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.703 [2024-11-26 15:24:36.895736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.703 "name": "Existed_Raid", 00:08:38.703 "uuid": "82f4f93a-1099-43c0-8527-004dcaa2d043", 00:08:38.703 "strip_size_kb": 64, 00:08:38.703 "state": "configuring", 00:08:38.703 "raid_level": "concat", 00:08:38.703 "superblock": true, 00:08:38.703 "num_base_bdevs": 3, 00:08:38.703 "num_base_bdevs_discovered": 1, 00:08:38.703 "num_base_bdevs_operational": 3, 00:08:38.703 "base_bdevs_list": [ 00:08:38.703 { 00:08:38.703 "name": "BaseBdev1", 00:08:38.703 "uuid": "cd7e78f7-010a-4dd0-8cd0-1b1a920647f7", 00:08:38.703 "is_configured": true, 00:08:38.703 "data_offset": 2048, 00:08:38.703 "data_size": 63488 00:08:38.703 }, 00:08:38.703 { 00:08:38.703 "name": "BaseBdev2", 00:08:38.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.703 "is_configured": false, 00:08:38.703 "data_offset": 0, 00:08:38.703 "data_size": 0 00:08:38.703 }, 00:08:38.703 { 00:08:38.703 "name": "BaseBdev3", 00:08:38.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.703 "is_configured": false, 00:08:38.703 "data_offset": 0, 00:08:38.703 "data_size": 0 00:08:38.703 } 00:08:38.703 ] 00:08:38.703 }' 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.703 15:24:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.961 [2024-11-26 15:24:37.329124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.961 BaseBdev2 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.961 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.961 [ 00:08:38.961 { 00:08:38.961 "name": "BaseBdev2", 00:08:38.961 "aliases": [ 00:08:38.961 
"facd74f3-de81-42ea-b02a-f40a0c853038" 00:08:38.961 ], 00:08:38.961 "product_name": "Malloc disk", 00:08:38.961 "block_size": 512, 00:08:38.961 "num_blocks": 65536, 00:08:38.961 "uuid": "facd74f3-de81-42ea-b02a-f40a0c853038", 00:08:38.961 "assigned_rate_limits": { 00:08:38.961 "rw_ios_per_sec": 0, 00:08:38.961 "rw_mbytes_per_sec": 0, 00:08:38.961 "r_mbytes_per_sec": 0, 00:08:38.961 "w_mbytes_per_sec": 0 00:08:38.961 }, 00:08:38.961 "claimed": true, 00:08:38.961 "claim_type": "exclusive_write", 00:08:38.961 "zoned": false, 00:08:38.961 "supported_io_types": { 00:08:38.961 "read": true, 00:08:38.961 "write": true, 00:08:38.961 "unmap": true, 00:08:38.961 "flush": true, 00:08:38.961 "reset": true, 00:08:38.961 "nvme_admin": false, 00:08:38.961 "nvme_io": false, 00:08:38.961 "nvme_io_md": false, 00:08:38.961 "write_zeroes": true, 00:08:38.961 "zcopy": true, 00:08:38.961 "get_zone_info": false, 00:08:38.961 "zone_management": false, 00:08:38.961 "zone_append": false, 00:08:38.961 "compare": false, 00:08:38.961 "compare_and_write": false, 00:08:38.961 "abort": true, 00:08:38.961 "seek_hole": false, 00:08:38.961 "seek_data": false, 00:08:38.961 "copy": true, 00:08:38.961 "nvme_iov_md": false 00:08:38.961 }, 00:08:38.961 "memory_domains": [ 00:08:38.961 { 00:08:38.961 "dma_device_id": "system", 00:08:38.961 "dma_device_type": 1 00:08:38.961 }, 00:08:38.961 { 00:08:38.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.961 "dma_device_type": 2 00:08:38.961 } 00:08:38.961 ], 00:08:38.961 "driver_specific": {} 00:08:38.961 } 00:08:38.961 ] 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.962 "name": "Existed_Raid", 00:08:38.962 "uuid": "82f4f93a-1099-43c0-8527-004dcaa2d043", 00:08:38.962 
"strip_size_kb": 64, 00:08:38.962 "state": "configuring", 00:08:38.962 "raid_level": "concat", 00:08:38.962 "superblock": true, 00:08:38.962 "num_base_bdevs": 3, 00:08:38.962 "num_base_bdevs_discovered": 2, 00:08:38.962 "num_base_bdevs_operational": 3, 00:08:38.962 "base_bdevs_list": [ 00:08:38.962 { 00:08:38.962 "name": "BaseBdev1", 00:08:38.962 "uuid": "cd7e78f7-010a-4dd0-8cd0-1b1a920647f7", 00:08:38.962 "is_configured": true, 00:08:38.962 "data_offset": 2048, 00:08:38.962 "data_size": 63488 00:08:38.962 }, 00:08:38.962 { 00:08:38.962 "name": "BaseBdev2", 00:08:38.962 "uuid": "facd74f3-de81-42ea-b02a-f40a0c853038", 00:08:38.962 "is_configured": true, 00:08:38.962 "data_offset": 2048, 00:08:38.962 "data_size": 63488 00:08:38.962 }, 00:08:38.962 { 00:08:38.962 "name": "BaseBdev3", 00:08:38.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.962 "is_configured": false, 00:08:38.962 "data_offset": 0, 00:08:38.962 "data_size": 0 00:08:38.962 } 00:08:38.962 ] 00:08:38.962 }' 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.962 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.528 [2024-11-26 15:24:37.795782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.528 [2024-11-26 15:24:37.796350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:39.528 [2024-11-26 15:24:37.796404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.528 BaseBdev3 00:08:39.528 [2024-11-26 15:24:37.797446] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.528 [2024-11-26 15:24:37.797863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:39.528 [2024-11-26 15:24:37.797913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:39.528 [2024-11-26 15:24:37.798340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.528 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.529 [ 00:08:39.529 { 00:08:39.529 "name": "BaseBdev3", 00:08:39.529 "aliases": [ 00:08:39.529 "f83fd9db-e454-45da-b3b7-5ff979d61bc7" 00:08:39.529 ], 00:08:39.529 "product_name": "Malloc disk", 00:08:39.529 "block_size": 512, 00:08:39.529 "num_blocks": 65536, 00:08:39.529 "uuid": "f83fd9db-e454-45da-b3b7-5ff979d61bc7", 00:08:39.529 "assigned_rate_limits": { 00:08:39.529 "rw_ios_per_sec": 0, 00:08:39.529 "rw_mbytes_per_sec": 0, 00:08:39.529 "r_mbytes_per_sec": 0, 00:08:39.529 "w_mbytes_per_sec": 0 00:08:39.529 }, 00:08:39.529 "claimed": true, 00:08:39.529 "claim_type": "exclusive_write", 00:08:39.529 "zoned": false, 00:08:39.529 "supported_io_types": { 00:08:39.529 "read": true, 00:08:39.529 "write": true, 00:08:39.529 "unmap": true, 00:08:39.529 "flush": true, 00:08:39.529 "reset": true, 00:08:39.529 "nvme_admin": false, 00:08:39.529 "nvme_io": false, 00:08:39.529 "nvme_io_md": false, 00:08:39.529 "write_zeroes": true, 00:08:39.529 "zcopy": true, 00:08:39.529 "get_zone_info": false, 00:08:39.529 "zone_management": false, 00:08:39.529 "zone_append": false, 00:08:39.529 "compare": false, 00:08:39.529 "compare_and_write": false, 00:08:39.529 "abort": true, 00:08:39.529 "seek_hole": false, 00:08:39.529 "seek_data": false, 00:08:39.529 "copy": true, 00:08:39.529 "nvme_iov_md": false 00:08:39.529 }, 00:08:39.529 "memory_domains": [ 00:08:39.529 { 00:08:39.529 "dma_device_id": "system", 00:08:39.529 "dma_device_type": 1 00:08:39.529 }, 00:08:39.529 { 00:08:39.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.529 "dma_device_type": 2 00:08:39.529 } 00:08:39.529 ], 00:08:39.529 "driver_specific": {} 00:08:39.529 } 00:08:39.529 ] 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:39.529 
15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.529 15:24:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.529 "name": "Existed_Raid", 00:08:39.529 "uuid": "82f4f93a-1099-43c0-8527-004dcaa2d043", 00:08:39.529 "strip_size_kb": 64, 00:08:39.529 "state": "online", 00:08:39.529 "raid_level": "concat", 00:08:39.529 "superblock": true, 00:08:39.529 "num_base_bdevs": 3, 00:08:39.529 "num_base_bdevs_discovered": 3, 00:08:39.529 "num_base_bdevs_operational": 3, 00:08:39.529 "base_bdevs_list": [ 00:08:39.529 { 00:08:39.529 "name": "BaseBdev1", 00:08:39.529 "uuid": "cd7e78f7-010a-4dd0-8cd0-1b1a920647f7", 00:08:39.529 "is_configured": true, 00:08:39.529 "data_offset": 2048, 00:08:39.529 "data_size": 63488 00:08:39.529 }, 00:08:39.529 { 00:08:39.529 "name": "BaseBdev2", 00:08:39.529 "uuid": "facd74f3-de81-42ea-b02a-f40a0c853038", 00:08:39.529 "is_configured": true, 00:08:39.529 "data_offset": 2048, 00:08:39.529 "data_size": 63488 00:08:39.529 }, 00:08:39.529 { 00:08:39.529 "name": "BaseBdev3", 00:08:39.529 "uuid": "f83fd9db-e454-45da-b3b7-5ff979d61bc7", 00:08:39.529 "is_configured": true, 00:08:39.529 "data_offset": 2048, 00:08:39.529 "data_size": 63488 00:08:39.529 } 00:08:39.529 ] 00:08:39.529 }' 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.529 15:24:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.789 
15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.789 [2024-11-26 15:24:38.232178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.789 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.069 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.069 "name": "Existed_Raid", 00:08:40.069 "aliases": [ 00:08:40.069 "82f4f93a-1099-43c0-8527-004dcaa2d043" 00:08:40.069 ], 00:08:40.069 "product_name": "Raid Volume", 00:08:40.069 "block_size": 512, 00:08:40.069 "num_blocks": 190464, 00:08:40.069 "uuid": "82f4f93a-1099-43c0-8527-004dcaa2d043", 00:08:40.069 "assigned_rate_limits": { 00:08:40.069 "rw_ios_per_sec": 0, 00:08:40.069 "rw_mbytes_per_sec": 0, 00:08:40.069 "r_mbytes_per_sec": 0, 00:08:40.069 "w_mbytes_per_sec": 0 00:08:40.069 }, 00:08:40.069 "claimed": false, 00:08:40.069 "zoned": false, 00:08:40.069 "supported_io_types": { 00:08:40.069 "read": true, 00:08:40.069 "write": true, 00:08:40.069 "unmap": true, 00:08:40.069 "flush": true, 00:08:40.069 "reset": true, 00:08:40.069 "nvme_admin": false, 00:08:40.069 "nvme_io": false, 00:08:40.069 "nvme_io_md": false, 00:08:40.069 "write_zeroes": true, 00:08:40.069 "zcopy": false, 00:08:40.069 "get_zone_info": false, 00:08:40.069 "zone_management": false, 00:08:40.069 "zone_append": false, 00:08:40.069 "compare": false, 00:08:40.069 "compare_and_write": false, 00:08:40.069 "abort": 
false, 00:08:40.069 "seek_hole": false, 00:08:40.069 "seek_data": false, 00:08:40.069 "copy": false, 00:08:40.069 "nvme_iov_md": false 00:08:40.069 }, 00:08:40.069 "memory_domains": [ 00:08:40.069 { 00:08:40.069 "dma_device_id": "system", 00:08:40.069 "dma_device_type": 1 00:08:40.069 }, 00:08:40.069 { 00:08:40.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.069 "dma_device_type": 2 00:08:40.069 }, 00:08:40.069 { 00:08:40.069 "dma_device_id": "system", 00:08:40.069 "dma_device_type": 1 00:08:40.069 }, 00:08:40.069 { 00:08:40.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.069 "dma_device_type": 2 00:08:40.069 }, 00:08:40.069 { 00:08:40.069 "dma_device_id": "system", 00:08:40.069 "dma_device_type": 1 00:08:40.069 }, 00:08:40.069 { 00:08:40.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.069 "dma_device_type": 2 00:08:40.069 } 00:08:40.069 ], 00:08:40.069 "driver_specific": { 00:08:40.069 "raid": { 00:08:40.069 "uuid": "82f4f93a-1099-43c0-8527-004dcaa2d043", 00:08:40.069 "strip_size_kb": 64, 00:08:40.069 "state": "online", 00:08:40.069 "raid_level": "concat", 00:08:40.069 "superblock": true, 00:08:40.070 "num_base_bdevs": 3, 00:08:40.070 "num_base_bdevs_discovered": 3, 00:08:40.070 "num_base_bdevs_operational": 3, 00:08:40.070 "base_bdevs_list": [ 00:08:40.070 { 00:08:40.070 "name": "BaseBdev1", 00:08:40.070 "uuid": "cd7e78f7-010a-4dd0-8cd0-1b1a920647f7", 00:08:40.070 "is_configured": true, 00:08:40.070 "data_offset": 2048, 00:08:40.070 "data_size": 63488 00:08:40.070 }, 00:08:40.070 { 00:08:40.070 "name": "BaseBdev2", 00:08:40.070 "uuid": "facd74f3-de81-42ea-b02a-f40a0c853038", 00:08:40.070 "is_configured": true, 00:08:40.070 "data_offset": 2048, 00:08:40.070 "data_size": 63488 00:08:40.070 }, 00:08:40.070 { 00:08:40.070 "name": "BaseBdev3", 00:08:40.070 "uuid": "f83fd9db-e454-45da-b3b7-5ff979d61bc7", 00:08:40.070 "is_configured": true, 00:08:40.070 "data_offset": 2048, 00:08:40.070 "data_size": 63488 00:08:40.070 } 00:08:40.070 ] 00:08:40.070 } 
00:08:40.070 } 00:08:40.070 }' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:40.070 BaseBdev2 00:08:40.070 BaseBdev3' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.070 [2024-11-26 15:24:38.479995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:08:40.070 [2024-11-26 15:24:38.480029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.070 [2024-11-26 15:24:38.480089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.070 15:24:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.070 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.330 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.330 "name": "Existed_Raid", 00:08:40.330 "uuid": "82f4f93a-1099-43c0-8527-004dcaa2d043", 00:08:40.330 "strip_size_kb": 64, 00:08:40.330 "state": "offline", 00:08:40.330 "raid_level": "concat", 00:08:40.330 "superblock": true, 00:08:40.330 "num_base_bdevs": 3, 00:08:40.330 "num_base_bdevs_discovered": 2, 00:08:40.330 "num_base_bdevs_operational": 2, 00:08:40.330 "base_bdevs_list": [ 00:08:40.330 { 00:08:40.330 "name": null, 00:08:40.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.330 "is_configured": false, 00:08:40.330 "data_offset": 0, 00:08:40.330 "data_size": 63488 00:08:40.330 }, 00:08:40.330 { 00:08:40.330 "name": "BaseBdev2", 00:08:40.330 "uuid": "facd74f3-de81-42ea-b02a-f40a0c853038", 00:08:40.330 "is_configured": true, 00:08:40.330 "data_offset": 2048, 00:08:40.330 "data_size": 63488 00:08:40.330 }, 00:08:40.330 { 00:08:40.330 "name": "BaseBdev3", 00:08:40.330 "uuid": "f83fd9db-e454-45da-b3b7-5ff979d61bc7", 00:08:40.330 "is_configured": true, 00:08:40.330 "data_offset": 2048, 00:08:40.330 "data_size": 63488 00:08:40.330 } 00:08:40.330 ] 00:08:40.330 }' 00:08:40.330 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.330 15:24:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.590 [2024-11-26 15:24:38.967408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.590 15:24:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.590 [2024-11-26 15:24:39.038792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:40.590 [2024-11-26 15:24:39.038862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.590 15:24:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:40.590 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.851 BaseBdev2 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.851 [ 00:08:40.851 { 00:08:40.851 "name": "BaseBdev2", 00:08:40.851 "aliases": [ 00:08:40.851 "9ef04240-ec35-407a-8380-777b4d981348" 00:08:40.851 ], 00:08:40.851 "product_name": "Malloc disk", 00:08:40.851 "block_size": 512, 00:08:40.851 "num_blocks": 65536, 00:08:40.851 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:40.851 "assigned_rate_limits": { 00:08:40.851 "rw_ios_per_sec": 0, 00:08:40.851 "rw_mbytes_per_sec": 0, 00:08:40.851 "r_mbytes_per_sec": 0, 00:08:40.851 "w_mbytes_per_sec": 0 00:08:40.851 }, 00:08:40.851 "claimed": false, 00:08:40.851 "zoned": false, 00:08:40.851 "supported_io_types": { 00:08:40.851 "read": true, 00:08:40.851 "write": true, 00:08:40.851 "unmap": true, 00:08:40.851 "flush": true, 00:08:40.851 "reset": true, 00:08:40.851 "nvme_admin": false, 00:08:40.851 "nvme_io": false, 00:08:40.851 "nvme_io_md": false, 00:08:40.851 "write_zeroes": true, 00:08:40.851 "zcopy": true, 00:08:40.851 "get_zone_info": false, 00:08:40.851 "zone_management": false, 00:08:40.851 "zone_append": false, 00:08:40.851 "compare": false, 00:08:40.851 "compare_and_write": false, 00:08:40.851 "abort": true, 00:08:40.851 "seek_hole": false, 00:08:40.851 "seek_data": false, 00:08:40.851 "copy": true, 00:08:40.851 
"nvme_iov_md": false 00:08:40.851 }, 00:08:40.851 "memory_domains": [ 00:08:40.851 { 00:08:40.851 "dma_device_id": "system", 00:08:40.851 "dma_device_type": 1 00:08:40.851 }, 00:08:40.851 { 00:08:40.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.851 "dma_device_type": 2 00:08:40.851 } 00:08:40.851 ], 00:08:40.851 "driver_specific": {} 00:08:40.851 } 00:08:40.851 ] 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.851 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.852 BaseBdev3 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.852 
15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.852 [ 00:08:40.852 { 00:08:40.852 "name": "BaseBdev3", 00:08:40.852 "aliases": [ 00:08:40.852 "f61af807-91b0-486b-96f4-07de6a2fe017" 00:08:40.852 ], 00:08:40.852 "product_name": "Malloc disk", 00:08:40.852 "block_size": 512, 00:08:40.852 "num_blocks": 65536, 00:08:40.852 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:40.852 "assigned_rate_limits": { 00:08:40.852 "rw_ios_per_sec": 0, 00:08:40.852 "rw_mbytes_per_sec": 0, 00:08:40.852 "r_mbytes_per_sec": 0, 00:08:40.852 "w_mbytes_per_sec": 0 00:08:40.852 }, 00:08:40.852 "claimed": false, 00:08:40.852 "zoned": false, 00:08:40.852 "supported_io_types": { 00:08:40.852 "read": true, 00:08:40.852 "write": true, 00:08:40.852 "unmap": true, 00:08:40.852 "flush": true, 00:08:40.852 "reset": true, 00:08:40.852 "nvme_admin": false, 00:08:40.852 "nvme_io": false, 00:08:40.852 "nvme_io_md": false, 00:08:40.852 "write_zeroes": true, 00:08:40.852 "zcopy": true, 00:08:40.852 "get_zone_info": false, 00:08:40.852 "zone_management": false, 00:08:40.852 "zone_append": false, 00:08:40.852 "compare": false, 00:08:40.852 "compare_and_write": false, 00:08:40.852 "abort": true, 00:08:40.852 "seek_hole": false, 00:08:40.852 "seek_data": 
false, 00:08:40.852 "copy": true, 00:08:40.852 "nvme_iov_md": false 00:08:40.852 }, 00:08:40.852 "memory_domains": [ 00:08:40.852 { 00:08:40.852 "dma_device_id": "system", 00:08:40.852 "dma_device_type": 1 00:08:40.852 }, 00:08:40.852 { 00:08:40.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.852 "dma_device_type": 2 00:08:40.852 } 00:08:40.852 ], 00:08:40.852 "driver_specific": {} 00:08:40.852 } 00:08:40.852 ] 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.852 [2024-11-26 15:24:39.198991] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.852 [2024-11-26 15:24:39.199036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.852 [2024-11-26 15:24:39.199055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.852 [2024-11-26 15:24:39.201032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.852 "name": "Existed_Raid", 00:08:40.852 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:40.852 "strip_size_kb": 64, 00:08:40.852 "state": "configuring", 00:08:40.852 "raid_level": "concat", 
00:08:40.852 "superblock": true, 00:08:40.852 "num_base_bdevs": 3, 00:08:40.852 "num_base_bdevs_discovered": 2, 00:08:40.852 "num_base_bdevs_operational": 3, 00:08:40.852 "base_bdevs_list": [ 00:08:40.852 { 00:08:40.852 "name": "BaseBdev1", 00:08:40.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.852 "is_configured": false, 00:08:40.852 "data_offset": 0, 00:08:40.852 "data_size": 0 00:08:40.852 }, 00:08:40.852 { 00:08:40.852 "name": "BaseBdev2", 00:08:40.852 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:40.852 "is_configured": true, 00:08:40.852 "data_offset": 2048, 00:08:40.852 "data_size": 63488 00:08:40.852 }, 00:08:40.852 { 00:08:40.852 "name": "BaseBdev3", 00:08:40.852 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:40.852 "is_configured": true, 00:08:40.852 "data_offset": 2048, 00:08:40.852 "data_size": 63488 00:08:40.852 } 00:08:40.852 ] 00:08:40.852 }' 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.852 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.421 [2024-11-26 15:24:39.599093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.421 15:24:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.421 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.422 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.422 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.422 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.422 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.422 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.422 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.422 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.422 "name": "Existed_Raid", 00:08:41.422 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:41.422 "strip_size_kb": 64, 00:08:41.422 "state": "configuring", 00:08:41.422 "raid_level": "concat", 00:08:41.422 "superblock": true, 00:08:41.422 "num_base_bdevs": 3, 00:08:41.422 "num_base_bdevs_discovered": 1, 00:08:41.422 "num_base_bdevs_operational": 3, 00:08:41.422 "base_bdevs_list": [ 00:08:41.422 
{ 00:08:41.422 "name": "BaseBdev1", 00:08:41.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.422 "is_configured": false, 00:08:41.422 "data_offset": 0, 00:08:41.422 "data_size": 0 00:08:41.422 }, 00:08:41.422 { 00:08:41.422 "name": null, 00:08:41.422 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:41.422 "is_configured": false, 00:08:41.422 "data_offset": 0, 00:08:41.422 "data_size": 63488 00:08:41.422 }, 00:08:41.422 { 00:08:41.422 "name": "BaseBdev3", 00:08:41.422 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:41.422 "is_configured": true, 00:08:41.422 "data_offset": 2048, 00:08:41.422 "data_size": 63488 00:08:41.422 } 00:08:41.422 ] 00:08:41.422 }' 00:08:41.422 15:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.422 15:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.682 [2024-11-26 15:24:40.090131] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.682 BaseBdev1 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.682 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.682 [ 00:08:41.682 { 00:08:41.682 "name": "BaseBdev1", 00:08:41.682 "aliases": [ 00:08:41.682 "98287df9-185f-468d-b443-d7d320d1d2b1" 00:08:41.682 ], 00:08:41.682 "product_name": "Malloc disk", 00:08:41.682 "block_size": 512, 00:08:41.682 "num_blocks": 65536, 00:08:41.682 
"uuid": "98287df9-185f-468d-b443-d7d320d1d2b1", 00:08:41.682 "assigned_rate_limits": { 00:08:41.682 "rw_ios_per_sec": 0, 00:08:41.682 "rw_mbytes_per_sec": 0, 00:08:41.682 "r_mbytes_per_sec": 0, 00:08:41.682 "w_mbytes_per_sec": 0 00:08:41.682 }, 00:08:41.682 "claimed": true, 00:08:41.682 "claim_type": "exclusive_write", 00:08:41.682 "zoned": false, 00:08:41.682 "supported_io_types": { 00:08:41.682 "read": true, 00:08:41.682 "write": true, 00:08:41.682 "unmap": true, 00:08:41.682 "flush": true, 00:08:41.682 "reset": true, 00:08:41.682 "nvme_admin": false, 00:08:41.682 "nvme_io": false, 00:08:41.682 "nvme_io_md": false, 00:08:41.683 "write_zeroes": true, 00:08:41.683 "zcopy": true, 00:08:41.683 "get_zone_info": false, 00:08:41.683 "zone_management": false, 00:08:41.683 "zone_append": false, 00:08:41.683 "compare": false, 00:08:41.683 "compare_and_write": false, 00:08:41.683 "abort": true, 00:08:41.683 "seek_hole": false, 00:08:41.683 "seek_data": false, 00:08:41.683 "copy": true, 00:08:41.683 "nvme_iov_md": false 00:08:41.683 }, 00:08:41.683 "memory_domains": [ 00:08:41.683 { 00:08:41.683 "dma_device_id": "system", 00:08:41.683 "dma_device_type": 1 00:08:41.683 }, 00:08:41.683 { 00:08:41.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.683 "dma_device_type": 2 00:08:41.683 } 00:08:41.683 ], 00:08:41.683 "driver_specific": {} 00:08:41.683 } 00:08:41.683 ] 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.683 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.943 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.943 "name": "Existed_Raid", 00:08:41.943 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:41.943 "strip_size_kb": 64, 00:08:41.943 "state": "configuring", 00:08:41.943 "raid_level": "concat", 00:08:41.943 "superblock": true, 00:08:41.943 "num_base_bdevs": 3, 00:08:41.943 "num_base_bdevs_discovered": 2, 00:08:41.943 "num_base_bdevs_operational": 3, 00:08:41.943 "base_bdevs_list": [ 00:08:41.943 { 00:08:41.943 "name": "BaseBdev1", 00:08:41.943 "uuid": "98287df9-185f-468d-b443-d7d320d1d2b1", 
00:08:41.943 "is_configured": true, 00:08:41.943 "data_offset": 2048, 00:08:41.943 "data_size": 63488 00:08:41.943 }, 00:08:41.943 { 00:08:41.943 "name": null, 00:08:41.943 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:41.943 "is_configured": false, 00:08:41.943 "data_offset": 0, 00:08:41.943 "data_size": 63488 00:08:41.943 }, 00:08:41.943 { 00:08:41.943 "name": "BaseBdev3", 00:08:41.943 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:41.943 "is_configured": true, 00:08:41.943 "data_offset": 2048, 00:08:41.943 "data_size": 63488 00:08:41.943 } 00:08:41.943 ] 00:08:41.943 }' 00:08:41.943 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.943 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.204 [2024-11-26 15:24:40.598369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.204 15:24:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.204 "name": 
"Existed_Raid", 00:08:42.204 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:42.204 "strip_size_kb": 64, 00:08:42.204 "state": "configuring", 00:08:42.204 "raid_level": "concat", 00:08:42.204 "superblock": true, 00:08:42.204 "num_base_bdevs": 3, 00:08:42.204 "num_base_bdevs_discovered": 1, 00:08:42.204 "num_base_bdevs_operational": 3, 00:08:42.204 "base_bdevs_list": [ 00:08:42.204 { 00:08:42.204 "name": "BaseBdev1", 00:08:42.204 "uuid": "98287df9-185f-468d-b443-d7d320d1d2b1", 00:08:42.204 "is_configured": true, 00:08:42.204 "data_offset": 2048, 00:08:42.204 "data_size": 63488 00:08:42.204 }, 00:08:42.204 { 00:08:42.204 "name": null, 00:08:42.204 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:42.204 "is_configured": false, 00:08:42.204 "data_offset": 0, 00:08:42.204 "data_size": 63488 00:08:42.204 }, 00:08:42.204 { 00:08:42.204 "name": null, 00:08:42.204 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:42.204 "is_configured": false, 00:08:42.204 "data_offset": 0, 00:08:42.204 "data_size": 63488 00:08:42.204 } 00:08:42.204 ] 00:08:42.204 }' 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.204 15:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:42.774 
15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 [2024-11-26 15:24:41.054526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.774 "name": "Existed_Raid", 00:08:42.774 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:42.774 "strip_size_kb": 64, 00:08:42.774 "state": "configuring", 00:08:42.774 "raid_level": "concat", 00:08:42.774 "superblock": true, 00:08:42.774 "num_base_bdevs": 3, 00:08:42.774 "num_base_bdevs_discovered": 2, 00:08:42.774 "num_base_bdevs_operational": 3, 00:08:42.774 "base_bdevs_list": [ 00:08:42.774 { 00:08:42.774 "name": "BaseBdev1", 00:08:42.774 "uuid": "98287df9-185f-468d-b443-d7d320d1d2b1", 00:08:42.774 "is_configured": true, 00:08:42.774 "data_offset": 2048, 00:08:42.774 "data_size": 63488 00:08:42.774 }, 00:08:42.774 { 00:08:42.774 "name": null, 00:08:42.774 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:42.774 "is_configured": false, 00:08:42.774 "data_offset": 0, 00:08:42.774 "data_size": 63488 00:08:42.774 }, 00:08:42.774 { 00:08:42.774 "name": "BaseBdev3", 00:08:42.774 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:42.774 "is_configured": true, 00:08:42.774 "data_offset": 2048, 00:08:42.774 "data_size": 63488 00:08:42.774 } 00:08:42.774 ] 00:08:42.774 }' 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.774 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.033 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.033 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.033 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.033 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.033 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.033 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:43.033 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:43.033 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.033 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.292 [2024-11-26 15:24:41.510656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.292 "name": "Existed_Raid", 00:08:43.292 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:43.292 "strip_size_kb": 64, 00:08:43.292 "state": "configuring", 00:08:43.292 "raid_level": "concat", 00:08:43.292 "superblock": true, 00:08:43.292 "num_base_bdevs": 3, 00:08:43.292 "num_base_bdevs_discovered": 1, 00:08:43.292 "num_base_bdevs_operational": 3, 00:08:43.292 "base_bdevs_list": [ 00:08:43.292 { 00:08:43.292 "name": null, 00:08:43.292 "uuid": "98287df9-185f-468d-b443-d7d320d1d2b1", 00:08:43.292 "is_configured": false, 00:08:43.292 "data_offset": 0, 00:08:43.292 "data_size": 63488 00:08:43.292 }, 00:08:43.292 { 00:08:43.292 "name": null, 00:08:43.292 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:43.292 "is_configured": false, 00:08:43.292 "data_offset": 0, 00:08:43.292 "data_size": 63488 00:08:43.292 }, 00:08:43.292 { 00:08:43.292 "name": "BaseBdev3", 00:08:43.292 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:43.292 "is_configured": true, 00:08:43.292 "data_offset": 2048, 00:08:43.292 "data_size": 63488 00:08:43.292 } 
00:08:43.292 ] 00:08:43.292 }' 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.292 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.552 [2024-11-26 15:24:41.985315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.552 15:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.552 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.812 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.812 "name": "Existed_Raid", 00:08:43.812 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:43.812 "strip_size_kb": 64, 00:08:43.812 "state": "configuring", 00:08:43.812 "raid_level": "concat", 00:08:43.812 "superblock": true, 00:08:43.812 "num_base_bdevs": 3, 00:08:43.812 "num_base_bdevs_discovered": 2, 00:08:43.812 "num_base_bdevs_operational": 3, 00:08:43.812 "base_bdevs_list": [ 00:08:43.812 { 00:08:43.812 "name": null, 00:08:43.812 "uuid": "98287df9-185f-468d-b443-d7d320d1d2b1", 00:08:43.812 "is_configured": false, 00:08:43.812 "data_offset": 0, 
00:08:43.812 "data_size": 63488 00:08:43.812 }, 00:08:43.812 { 00:08:43.812 "name": "BaseBdev2", 00:08:43.812 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:43.812 "is_configured": true, 00:08:43.812 "data_offset": 2048, 00:08:43.812 "data_size": 63488 00:08:43.812 }, 00:08:43.812 { 00:08:43.812 "name": "BaseBdev3", 00:08:43.812 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:43.812 "is_configured": true, 00:08:43.812 "data_offset": 2048, 00:08:43.812 "data_size": 63488 00:08:43.812 } 00:08:43.812 ] 00:08:43.812 }' 00:08:43.812 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.812 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 98287df9-185f-468d-b443-d7d320d1d2b1 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.072 [2024-11-26 15:24:42.500837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:44.072 [2024-11-26 15:24:42.501024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:44.072 [2024-11-26 15:24:42.501038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.072 [2024-11-26 15:24:42.501346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:44.072 NewBaseBdev 00:08:44.072 [2024-11-26 15:24:42.501485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:44.072 [2024-11-26 15:24:42.501504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:44.072 [2024-11-26 15:24:42.501621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.072 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.072 [ 00:08:44.072 { 00:08:44.072 "name": "NewBaseBdev", 00:08:44.072 "aliases": [ 00:08:44.072 "98287df9-185f-468d-b443-d7d320d1d2b1" 00:08:44.072 ], 00:08:44.072 "product_name": "Malloc disk", 00:08:44.072 "block_size": 512, 00:08:44.072 "num_blocks": 65536, 00:08:44.072 "uuid": "98287df9-185f-468d-b443-d7d320d1d2b1", 00:08:44.072 "assigned_rate_limits": { 00:08:44.072 "rw_ios_per_sec": 0, 00:08:44.072 "rw_mbytes_per_sec": 0, 00:08:44.072 "r_mbytes_per_sec": 0, 00:08:44.072 "w_mbytes_per_sec": 0 00:08:44.072 }, 00:08:44.072 "claimed": true, 00:08:44.072 "claim_type": "exclusive_write", 00:08:44.072 "zoned": false, 00:08:44.072 "supported_io_types": { 00:08:44.072 "read": true, 00:08:44.072 "write": true, 00:08:44.072 "unmap": true, 00:08:44.072 "flush": true, 00:08:44.072 "reset": true, 00:08:44.072 "nvme_admin": false, 00:08:44.072 "nvme_io": false, 00:08:44.073 "nvme_io_md": false, 00:08:44.073 "write_zeroes": true, 00:08:44.073 "zcopy": true, 00:08:44.073 "get_zone_info": false, 
00:08:44.073 "zone_management": false, 00:08:44.073 "zone_append": false, 00:08:44.073 "compare": false, 00:08:44.073 "compare_and_write": false, 00:08:44.073 "abort": true, 00:08:44.073 "seek_hole": false, 00:08:44.073 "seek_data": false, 00:08:44.073 "copy": true, 00:08:44.073 "nvme_iov_md": false 00:08:44.073 }, 00:08:44.073 "memory_domains": [ 00:08:44.073 { 00:08:44.073 "dma_device_id": "system", 00:08:44.073 "dma_device_type": 1 00:08:44.073 }, 00:08:44.073 { 00:08:44.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.073 "dma_device_type": 2 00:08:44.073 } 00:08:44.073 ], 00:08:44.073 "driver_specific": {} 00:08:44.073 } 00:08:44.073 ] 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.073 15:24:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.073 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.333 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.333 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.333 "name": "Existed_Raid", 00:08:44.333 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:44.333 "strip_size_kb": 64, 00:08:44.333 "state": "online", 00:08:44.333 "raid_level": "concat", 00:08:44.333 "superblock": true, 00:08:44.333 "num_base_bdevs": 3, 00:08:44.333 "num_base_bdevs_discovered": 3, 00:08:44.333 "num_base_bdevs_operational": 3, 00:08:44.333 "base_bdevs_list": [ 00:08:44.333 { 00:08:44.333 "name": "NewBaseBdev", 00:08:44.333 "uuid": "98287df9-185f-468d-b443-d7d320d1d2b1", 00:08:44.333 "is_configured": true, 00:08:44.333 "data_offset": 2048, 00:08:44.333 "data_size": 63488 00:08:44.333 }, 00:08:44.333 { 00:08:44.333 "name": "BaseBdev2", 00:08:44.333 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:44.333 "is_configured": true, 00:08:44.333 "data_offset": 2048, 00:08:44.333 "data_size": 63488 00:08:44.333 }, 00:08:44.333 { 00:08:44.333 "name": "BaseBdev3", 00:08:44.333 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:44.333 "is_configured": true, 00:08:44.333 "data_offset": 2048, 00:08:44.333 "data_size": 63488 00:08:44.333 } 00:08:44.333 ] 00:08:44.333 }' 00:08:44.333 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.333 
15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.594 [2024-11-26 15:24:42.953353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.594 "name": "Existed_Raid", 00:08:44.594 "aliases": [ 00:08:44.594 "c9394409-715e-43c7-a7fd-eb537cb1468e" 00:08:44.594 ], 00:08:44.594 "product_name": "Raid Volume", 00:08:44.594 "block_size": 512, 00:08:44.594 "num_blocks": 190464, 00:08:44.594 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:44.594 "assigned_rate_limits": { 00:08:44.594 "rw_ios_per_sec": 0, 00:08:44.594 "rw_mbytes_per_sec": 0, 
00:08:44.594 "r_mbytes_per_sec": 0, 00:08:44.594 "w_mbytes_per_sec": 0 00:08:44.594 }, 00:08:44.594 "claimed": false, 00:08:44.594 "zoned": false, 00:08:44.594 "supported_io_types": { 00:08:44.594 "read": true, 00:08:44.594 "write": true, 00:08:44.594 "unmap": true, 00:08:44.594 "flush": true, 00:08:44.594 "reset": true, 00:08:44.594 "nvme_admin": false, 00:08:44.594 "nvme_io": false, 00:08:44.594 "nvme_io_md": false, 00:08:44.594 "write_zeroes": true, 00:08:44.594 "zcopy": false, 00:08:44.594 "get_zone_info": false, 00:08:44.594 "zone_management": false, 00:08:44.594 "zone_append": false, 00:08:44.594 "compare": false, 00:08:44.594 "compare_and_write": false, 00:08:44.594 "abort": false, 00:08:44.594 "seek_hole": false, 00:08:44.594 "seek_data": false, 00:08:44.594 "copy": false, 00:08:44.594 "nvme_iov_md": false 00:08:44.594 }, 00:08:44.594 "memory_domains": [ 00:08:44.594 { 00:08:44.594 "dma_device_id": "system", 00:08:44.594 "dma_device_type": 1 00:08:44.594 }, 00:08:44.594 { 00:08:44.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.594 "dma_device_type": 2 00:08:44.594 }, 00:08:44.594 { 00:08:44.594 "dma_device_id": "system", 00:08:44.594 "dma_device_type": 1 00:08:44.594 }, 00:08:44.594 { 00:08:44.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.594 "dma_device_type": 2 00:08:44.594 }, 00:08:44.594 { 00:08:44.594 "dma_device_id": "system", 00:08:44.594 "dma_device_type": 1 00:08:44.594 }, 00:08:44.594 { 00:08:44.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.594 "dma_device_type": 2 00:08:44.594 } 00:08:44.594 ], 00:08:44.594 "driver_specific": { 00:08:44.594 "raid": { 00:08:44.594 "uuid": "c9394409-715e-43c7-a7fd-eb537cb1468e", 00:08:44.594 "strip_size_kb": 64, 00:08:44.594 "state": "online", 00:08:44.594 "raid_level": "concat", 00:08:44.594 "superblock": true, 00:08:44.594 "num_base_bdevs": 3, 00:08:44.594 "num_base_bdevs_discovered": 3, 00:08:44.594 "num_base_bdevs_operational": 3, 00:08:44.594 "base_bdevs_list": [ 00:08:44.594 { 
00:08:44.594 "name": "NewBaseBdev", 00:08:44.594 "uuid": "98287df9-185f-468d-b443-d7d320d1d2b1", 00:08:44.594 "is_configured": true, 00:08:44.594 "data_offset": 2048, 00:08:44.594 "data_size": 63488 00:08:44.594 }, 00:08:44.594 { 00:08:44.594 "name": "BaseBdev2", 00:08:44.594 "uuid": "9ef04240-ec35-407a-8380-777b4d981348", 00:08:44.594 "is_configured": true, 00:08:44.594 "data_offset": 2048, 00:08:44.594 "data_size": 63488 00:08:44.594 }, 00:08:44.594 { 00:08:44.594 "name": "BaseBdev3", 00:08:44.594 "uuid": "f61af807-91b0-486b-96f4-07de6a2fe017", 00:08:44.594 "is_configured": true, 00:08:44.594 "data_offset": 2048, 00:08:44.594 "data_size": 63488 00:08:44.594 } 00:08:44.594 ] 00:08:44.594 } 00:08:44.594 } 00:08:44.594 }' 00:08:44.594 15:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.594 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:44.594 BaseBdev2 00:08:44.594 BaseBdev3' 00:08:44.594 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.855 15:24:43 
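[Editor's note] The trace above (bdev_raid.sh@187-@193) pulls the configured base bdev names out of the `bdev_get_bdevs -b Existed_Raid` JSON and then compares a `block_size/md_size/md_interleave/dif_type` tuple between the raid volume and each base bdev. The jq logic can be reproduced offline; the Python sketch below is not part of the test suite — it runs the same two extractions against an abridged copy of the dump shown in this log (the live RPC call needs a running SPDK target, so the JSON is inlined):

```python
import json

# Abridged from the `bdev_get_bdevs -b Existed_Raid` dump in the log above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "block_size": 512,
  "num_blocks": 190464,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": true, "data_size": 63488},
        {"name": "BaseBdev2",   "is_configured": true, "data_size": 63488},
        {"name": "BaseBdev3",   "is_configured": true, "data_size": 63488}
      ]
    }
  }
}
""")

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
bases = raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
base_bdev_names = [b["name"] for b in bases if b["is_configured"]]
print(base_bdev_names)  # ['NewBaseBdev', 'BaseBdev2', 'BaseBdev3']

# jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# jq renders the absent keys as empty strings, which is where the trailing
# spaces in cmp_raid_bdev='512 ' come from.
def cmp_key(info):
    return " ".join(str(info.get(k, "")) for k in
                    ("block_size", "md_size", "md_interleave", "dif_type"))

print(repr(cmp_key(raid_bdev_info)))  # '512   '

# Geometry sanity check: a 3-base concat volume exposes the sum of the base
# data sizes, 3 * 63488 == 190464 blocks.
assert sum(b["data_size"] for b in bases) == raid_bdev_info["num_blocks"]
```

The `[[ 512 == \5\1\2\ \ \ ]]` comparisons in the trace are this same tuple check in bash pattern form: `512` followed by three escaped spaces for the three empty fields.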
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.855 [2024-11-26 15:24:43.213104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.855 [2024-11-26 15:24:43.213142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.855 [2024-11-26 15:24:43.213236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.855 [2024-11-26 15:24:43.213297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.855 [2024-11-26 15:24:43.213307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78938 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78938 ']' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 78938 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.855 15:24:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78938 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.855 killing process with pid 78938 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78938' 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 78938 00:08:44.855 [2024-11-26 15:24:43.262682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.855 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 78938 00:08:44.855 [2024-11-26 15:24:43.294270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.115 15:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:45.115 00:08:45.115 real 0m8.450s 00:08:45.115 user 0m14.436s 00:08:45.115 sys 0m1.715s 00:08:45.115 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.115 ************************************ 00:08:45.115 END TEST raid_state_function_test_sb 00:08:45.115 ************************************ 00:08:45.115 15:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.115 15:24:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:45.115 15:24:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:45.115 15:24:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.115 15:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.115 ************************************ 00:08:45.115 START TEST raid_superblock_test 00:08:45.115 
************************************ 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79542 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79542 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79542 ']' 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.115 15:24:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.376 [2024-11-26 15:24:43.665894] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:08:45.376 [2024-11-26 15:24:43.666053] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79542 ] 00:08:45.376 [2024-11-26 15:24:43.801298] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:45.376 [2024-11-26 15:24:43.838484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.636 [2024-11-26 15:24:43.867070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.636 [2024-11-26 15:24:43.909984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.636 [2024-11-26 15:24:43.910026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.207 malloc1 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.207 [2024-11-26 15:24:44.526447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:46.207 [2024-11-26 15:24:44.526536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.207 [2024-11-26 15:24:44.526568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:46.207 [2024-11-26 15:24:44.526581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.207 [2024-11-26 15:24:44.528795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.207 [2024-11-26 15:24:44.528836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:46.207 pt1 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.207 malloc2 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.207 [2024-11-26 15:24:44.555512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:46.207 [2024-11-26 15:24:44.555568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.207 [2024-11-26 15:24:44.555587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:46.207 [2024-11-26 15:24:44.555597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.207 [2024-11-26 15:24:44.557946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.207 [2024-11-26 15:24:44.557983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:46.207 pt2 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.207 malloc3 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.207 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.208 [2024-11-26 15:24:44.584552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:46.208 [2024-11-26 15:24:44.584607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.208 [2024-11-26 15:24:44.584627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:46.208 [2024-11-26 15:24:44.584636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:46.208 [2024-11-26 15:24:44.586854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.208 [2024-11-26 15:24:44.586907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:46.208 pt3 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.208 [2024-11-26 15:24:44.596618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:46.208 [2024-11-26 15:24:44.598525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.208 [2024-11-26 15:24:44.598594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:46.208 [2024-11-26 15:24:44.598756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:46.208 [2024-11-26 15:24:44.598776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:46.208 [2024-11-26 15:24:44.599081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:46.208 [2024-11-26 15:24:44.599262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:46.208 [2024-11-26 15:24:44.599286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:46.208 [2024-11-26 
15:24:44.599443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.208 "name": "raid_bdev1", 00:08:46.208 
"uuid": "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f", 00:08:46.208 "strip_size_kb": 64, 00:08:46.208 "state": "online", 00:08:46.208 "raid_level": "concat", 00:08:46.208 "superblock": true, 00:08:46.208 "num_base_bdevs": 3, 00:08:46.208 "num_base_bdevs_discovered": 3, 00:08:46.208 "num_base_bdevs_operational": 3, 00:08:46.208 "base_bdevs_list": [ 00:08:46.208 { 00:08:46.208 "name": "pt1", 00:08:46.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:46.208 "is_configured": true, 00:08:46.208 "data_offset": 2048, 00:08:46.208 "data_size": 63488 00:08:46.208 }, 00:08:46.208 { 00:08:46.208 "name": "pt2", 00:08:46.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.208 "is_configured": true, 00:08:46.208 "data_offset": 2048, 00:08:46.208 "data_size": 63488 00:08:46.208 }, 00:08:46.208 { 00:08:46.208 "name": "pt3", 00:08:46.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.208 "is_configured": true, 00:08:46.208 "data_offset": 2048, 00:08:46.208 "data_size": 63488 00:08:46.208 } 00:08:46.208 ] 00:08:46.208 }' 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.208 15:24:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.781 
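[Editor's note] `verify_raid_bdev_state raid_bdev1 online concat 64 3` (bdev_raid.sh@431/@113 above) selects the matching entry from `bdev_raid_get_bdevs all` and checks state, raid level, strip size, and operational base count against the expected values passed in. A minimal Python rendering of that selection and the checks, using an abridged entry copied from the dump in this log (field names are taken verbatim from the trace; the RPC itself is not invoked):

```python
import json

# Abridged entry from `rpc.py bdev_raid_get_bdevs all`, copied from the log above.
bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "concat",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
""")

# jq: '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The expectations verify_raid_bdev_state was called with: online concat 64 3.
assert info["state"] == "online"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 3
print("raid_bdev1 state OK")
```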
15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.781 [2024-11-26 15:24:45.049015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.781 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.782 "name": "raid_bdev1", 00:08:46.782 "aliases": [ 00:08:46.782 "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f" 00:08:46.782 ], 00:08:46.782 "product_name": "Raid Volume", 00:08:46.782 "block_size": 512, 00:08:46.782 "num_blocks": 190464, 00:08:46.782 "uuid": "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f", 00:08:46.782 "assigned_rate_limits": { 00:08:46.782 "rw_ios_per_sec": 0, 00:08:46.782 "rw_mbytes_per_sec": 0, 00:08:46.782 "r_mbytes_per_sec": 0, 00:08:46.782 "w_mbytes_per_sec": 0 00:08:46.782 }, 00:08:46.782 "claimed": false, 00:08:46.782 "zoned": false, 00:08:46.782 "supported_io_types": { 00:08:46.782 "read": true, 00:08:46.782 "write": true, 00:08:46.782 "unmap": true, 00:08:46.782 "flush": true, 00:08:46.782 "reset": true, 00:08:46.782 "nvme_admin": false, 00:08:46.782 "nvme_io": false, 00:08:46.782 "nvme_io_md": false, 00:08:46.782 "write_zeroes": true, 00:08:46.782 "zcopy": false, 00:08:46.782 "get_zone_info": false, 00:08:46.782 "zone_management": false, 00:08:46.782 "zone_append": false, 00:08:46.782 "compare": false, 00:08:46.782 "compare_and_write": false, 00:08:46.782 "abort": false, 00:08:46.782 "seek_hole": false, 00:08:46.782 "seek_data": false, 00:08:46.782 "copy": false, 00:08:46.782 "nvme_iov_md": false 00:08:46.782 }, 00:08:46.782 "memory_domains": [ 00:08:46.782 { 00:08:46.782 "dma_device_id": "system", 00:08:46.782 
"dma_device_type": 1 00:08:46.782 }, 00:08:46.782 { 00:08:46.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.782 "dma_device_type": 2 00:08:46.782 }, 00:08:46.782 { 00:08:46.782 "dma_device_id": "system", 00:08:46.782 "dma_device_type": 1 00:08:46.782 }, 00:08:46.782 { 00:08:46.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.782 "dma_device_type": 2 00:08:46.782 }, 00:08:46.782 { 00:08:46.782 "dma_device_id": "system", 00:08:46.782 "dma_device_type": 1 00:08:46.782 }, 00:08:46.782 { 00:08:46.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.782 "dma_device_type": 2 00:08:46.782 } 00:08:46.782 ], 00:08:46.782 "driver_specific": { 00:08:46.782 "raid": { 00:08:46.782 "uuid": "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f", 00:08:46.782 "strip_size_kb": 64, 00:08:46.782 "state": "online", 00:08:46.782 "raid_level": "concat", 00:08:46.782 "superblock": true, 00:08:46.782 "num_base_bdevs": 3, 00:08:46.782 "num_base_bdevs_discovered": 3, 00:08:46.782 "num_base_bdevs_operational": 3, 00:08:46.782 "base_bdevs_list": [ 00:08:46.782 { 00:08:46.782 "name": "pt1", 00:08:46.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:46.782 "is_configured": true, 00:08:46.782 "data_offset": 2048, 00:08:46.782 "data_size": 63488 00:08:46.782 }, 00:08:46.782 { 00:08:46.782 "name": "pt2", 00:08:46.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.782 "is_configured": true, 00:08:46.782 "data_offset": 2048, 00:08:46.782 "data_size": 63488 00:08:46.782 }, 00:08:46.782 { 00:08:46.782 "name": "pt3", 00:08:46.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.782 "is_configured": true, 00:08:46.782 "data_offset": 2048, 00:08:46.782 "data_size": 63488 00:08:46.782 } 00:08:46.782 ] 00:08:46.782 } 00:08:46.782 } 00:08:46.782 }' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:46.782 pt2 00:08:46.782 pt3' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.782 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 [2024-11-26 15:24:45.257072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ebc2fac8-6692-4316-b9b9-0cfefbc10c1f 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ebc2fac8-6692-4316-b9b9-0cfefbc10c1f ']' 00:08:47.043 15:24:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 [2024-11-26 15:24:45.288756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.043 [2024-11-26 15:24:45.288807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.043 [2024-11-26 15:24:45.288903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.043 [2024-11-26 15:24:45.288988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.043 [2024-11-26 15:24:45.289011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.043 15:24:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 [2024-11-26 15:24:45.424863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:47.043 [2024-11-26 15:24:45.426814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:47.043 [2024-11-26 15:24:45.426875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:47.043 [2024-11-26 15:24:45.426923] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:47.043 [2024-11-26 15:24:45.426972] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:47.043 [2024-11-26 15:24:45.426993] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:47.043 [2024-11-26 15:24:45.427007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.043 [2024-11-26 15:24:45.427017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:47.043 request: 00:08:47.043 { 00:08:47.043 "name": "raid_bdev1", 00:08:47.043 "raid_level": "concat", 00:08:47.043 "base_bdevs": [ 00:08:47.043 "malloc1", 00:08:47.043 "malloc2", 00:08:47.043 "malloc3" 00:08:47.043 ], 00:08:47.043 "strip_size_kb": 64, 00:08:47.043 "superblock": false, 00:08:47.043 "method": "bdev_raid_create", 00:08:47.043 "req_id": 1 00:08:47.043 } 00:08:47.043 Got JSON-RPC error response 00:08:47.043 response: 00:08:47.043 { 00:08:47.043 "code": -17, 00:08:47.043 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:47.043 } 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.043 15:24:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:47.043 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.044 [2024-11-26 15:24:45.488820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:47.044 [2024-11-26 15:24:45.488910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.044 [2024-11-26 15:24:45.488930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:47.044 [2024-11-26 15:24:45.488940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.044 [2024-11-26 15:24:45.491174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.044 [2024-11-26 15:24:45.491218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:47.044 [2024-11-26 15:24:45.491314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:47.044 [2024-11-26 15:24:45.491372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.044 pt1 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:47.044 15:24:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.044 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.304 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.304 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.304 "name": "raid_bdev1", 00:08:47.304 "uuid": "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f", 00:08:47.304 "strip_size_kb": 64, 00:08:47.304 "state": "configuring", 00:08:47.304 "raid_level": "concat", 00:08:47.304 "superblock": true, 00:08:47.304 "num_base_bdevs": 3, 00:08:47.304 "num_base_bdevs_discovered": 1, 00:08:47.304 "num_base_bdevs_operational": 3, 00:08:47.304 "base_bdevs_list": [ 
00:08:47.304 { 00:08:47.304 "name": "pt1", 00:08:47.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.304 "is_configured": true, 00:08:47.304 "data_offset": 2048, 00:08:47.304 "data_size": 63488 00:08:47.304 }, 00:08:47.304 { 00:08:47.304 "name": null, 00:08:47.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.304 "is_configured": false, 00:08:47.304 "data_offset": 2048, 00:08:47.304 "data_size": 63488 00:08:47.304 }, 00:08:47.304 { 00:08:47.304 "name": null, 00:08:47.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.304 "is_configured": false, 00:08:47.304 "data_offset": 2048, 00:08:47.304 "data_size": 63488 00:08:47.304 } 00:08:47.304 ] 00:08:47.304 }' 00:08:47.304 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.304 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.564 [2024-11-26 15:24:45.944962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:47.564 [2024-11-26 15:24:45.945034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.564 [2024-11-26 15:24:45.945060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:47.564 [2024-11-26 15:24:45.945068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.564 [2024-11-26 15:24:45.945509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.564 [2024-11-26 
15:24:45.945539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:47.564 [2024-11-26 15:24:45.945616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:47.564 [2024-11-26 15:24:45.945642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.564 pt2 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.564 [2024-11-26 15:24:45.956994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.564 15:24:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.564 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.564 "name": "raid_bdev1", 00:08:47.564 "uuid": "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f", 00:08:47.564 "strip_size_kb": 64, 00:08:47.564 "state": "configuring", 00:08:47.564 "raid_level": "concat", 00:08:47.564 "superblock": true, 00:08:47.564 "num_base_bdevs": 3, 00:08:47.564 "num_base_bdevs_discovered": 1, 00:08:47.564 "num_base_bdevs_operational": 3, 00:08:47.564 "base_bdevs_list": [ 00:08:47.564 { 00:08:47.564 "name": "pt1", 00:08:47.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.564 "is_configured": true, 00:08:47.564 "data_offset": 2048, 00:08:47.564 "data_size": 63488 00:08:47.564 }, 00:08:47.564 { 00:08:47.564 "name": null, 00:08:47.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.564 "is_configured": false, 00:08:47.564 "data_offset": 0, 00:08:47.564 "data_size": 63488 00:08:47.564 }, 00:08:47.564 { 00:08:47.564 "name": null, 00:08:47.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.564 "is_configured": false, 00:08:47.564 "data_offset": 2048, 00:08:47.564 "data_size": 63488 00:08:47.564 } 00:08:47.564 ] 00:08:47.564 }' 00:08:47.564 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.564 15:24:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.135 [2024-11-26 15:24:46.341047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:48.135 [2024-11-26 15:24:46.341111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.135 [2024-11-26 15:24:46.341128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:48.135 [2024-11-26 15:24:46.341139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.135 [2024-11-26 15:24:46.341553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.135 [2024-11-26 15:24:46.341582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:48.135 [2024-11-26 15:24:46.341649] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:48.135 [2024-11-26 15:24:46.341675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:48.135 pt2 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.135 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.135 [2024-11-26 15:24:46.353017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:48.135 [2024-11-26 15:24:46.353084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.135 [2024-11-26 15:24:46.353097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:48.135 [2024-11-26 15:24:46.353107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.135 [2024-11-26 15:24:46.353423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.135 [2024-11-26 15:24:46.353450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:48.135 [2024-11-26 15:24:46.353501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:48.135 [2024-11-26 15:24:46.353520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:48.135 [2024-11-26 15:24:46.353601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:48.135 [2024-11-26 15:24:46.353616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.135 [2024-11-26 15:24:46.353835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:48.135 [2024-11-26 15:24:46.353947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:48.135 [2024-11-26 15:24:46.353959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:48.135 [2024-11-26 15:24:46.354056] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.136 pt3 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.136 "name": "raid_bdev1", 00:08:48.136 "uuid": "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f", 00:08:48.136 "strip_size_kb": 64, 00:08:48.136 "state": "online", 00:08:48.136 "raid_level": "concat", 00:08:48.136 "superblock": true, 00:08:48.136 "num_base_bdevs": 3, 00:08:48.136 "num_base_bdevs_discovered": 3, 00:08:48.136 "num_base_bdevs_operational": 3, 00:08:48.136 "base_bdevs_list": [ 00:08:48.136 { 00:08:48.136 "name": "pt1", 00:08:48.136 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.136 "is_configured": true, 00:08:48.136 "data_offset": 2048, 00:08:48.136 "data_size": 63488 00:08:48.136 }, 00:08:48.136 { 00:08:48.136 "name": "pt2", 00:08:48.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.136 "is_configured": true, 00:08:48.136 "data_offset": 2048, 00:08:48.136 "data_size": 63488 00:08:48.136 }, 00:08:48.136 { 00:08:48.136 "name": "pt3", 00:08:48.136 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.136 "is_configured": true, 00:08:48.136 "data_offset": 2048, 00:08:48.136 "data_size": 63488 00:08:48.136 } 00:08:48.136 ] 00:08:48.136 }' 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.136 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.395 15:24:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.395 [2024-11-26 15:24:46.773473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.395 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.395 "name": "raid_bdev1", 00:08:48.395 "aliases": [ 00:08:48.395 "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f" 00:08:48.395 ], 00:08:48.395 "product_name": "Raid Volume", 00:08:48.395 "block_size": 512, 00:08:48.395 "num_blocks": 190464, 00:08:48.395 "uuid": "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f", 00:08:48.395 "assigned_rate_limits": { 00:08:48.395 "rw_ios_per_sec": 0, 00:08:48.395 "rw_mbytes_per_sec": 0, 00:08:48.395 "r_mbytes_per_sec": 0, 00:08:48.395 "w_mbytes_per_sec": 0 00:08:48.395 }, 00:08:48.395 "claimed": false, 00:08:48.395 "zoned": false, 00:08:48.395 "supported_io_types": { 00:08:48.395 "read": true, 00:08:48.395 "write": true, 00:08:48.395 "unmap": true, 00:08:48.395 "flush": true, 00:08:48.395 "reset": true, 00:08:48.395 "nvme_admin": false, 00:08:48.395 "nvme_io": false, 00:08:48.395 "nvme_io_md": false, 00:08:48.395 "write_zeroes": true, 00:08:48.395 "zcopy": false, 00:08:48.395 "get_zone_info": false, 00:08:48.395 "zone_management": false, 00:08:48.395 "zone_append": false, 00:08:48.395 "compare": false, 00:08:48.396 "compare_and_write": false, 00:08:48.396 "abort": false, 00:08:48.396 "seek_hole": false, 00:08:48.396 
"seek_data": false, 00:08:48.396 "copy": false, 00:08:48.396 "nvme_iov_md": false 00:08:48.396 }, 00:08:48.396 "memory_domains": [ 00:08:48.396 { 00:08:48.396 "dma_device_id": "system", 00:08:48.396 "dma_device_type": 1 00:08:48.396 }, 00:08:48.396 { 00:08:48.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.396 "dma_device_type": 2 00:08:48.396 }, 00:08:48.396 { 00:08:48.396 "dma_device_id": "system", 00:08:48.396 "dma_device_type": 1 00:08:48.396 }, 00:08:48.396 { 00:08:48.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.396 "dma_device_type": 2 00:08:48.396 }, 00:08:48.396 { 00:08:48.396 "dma_device_id": "system", 00:08:48.396 "dma_device_type": 1 00:08:48.396 }, 00:08:48.396 { 00:08:48.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.396 "dma_device_type": 2 00:08:48.396 } 00:08:48.396 ], 00:08:48.396 "driver_specific": { 00:08:48.396 "raid": { 00:08:48.396 "uuid": "ebc2fac8-6692-4316-b9b9-0cfefbc10c1f", 00:08:48.396 "strip_size_kb": 64, 00:08:48.396 "state": "online", 00:08:48.396 "raid_level": "concat", 00:08:48.396 "superblock": true, 00:08:48.396 "num_base_bdevs": 3, 00:08:48.396 "num_base_bdevs_discovered": 3, 00:08:48.396 "num_base_bdevs_operational": 3, 00:08:48.396 "base_bdevs_list": [ 00:08:48.396 { 00:08:48.396 "name": "pt1", 00:08:48.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.396 "is_configured": true, 00:08:48.396 "data_offset": 2048, 00:08:48.396 "data_size": 63488 00:08:48.396 }, 00:08:48.396 { 00:08:48.396 "name": "pt2", 00:08:48.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.396 "is_configured": true, 00:08:48.396 "data_offset": 2048, 00:08:48.396 "data_size": 63488 00:08:48.396 }, 00:08:48.396 { 00:08:48.396 "name": "pt3", 00:08:48.396 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.396 "is_configured": true, 00:08:48.396 "data_offset": 2048, 00:08:48.396 "data_size": 63488 00:08:48.396 } 00:08:48.396 ] 00:08:48.396 } 00:08:48.396 } 00:08:48.396 }' 00:08:48.396 15:24:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.396 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:48.396 pt2 00:08:48.396 pt3' 00:08:48.396 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.657 15:24:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.657 [2024-11-26 15:24:47.049479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
ebc2fac8-6692-4316-b9b9-0cfefbc10c1f '!=' ebc2fac8-6692-4316-b9b9-0cfefbc10c1f ']' 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79542 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79542 ']' 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79542 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79542 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.657 killing process with pid 79542 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79542' 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 79542 00:08:48.657 [2024-11-26 15:24:47.125958] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.657 [2024-11-26 15:24:47.126046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.657 [2024-11-26 15:24:47.126105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.657 [2024-11-26 15:24:47.126118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:48.657 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79542 00:08:48.917 [2024-11-26 15:24:47.158145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.917 ************************************ 00:08:48.917 END TEST raid_superblock_test 00:08:48.917 ************************************ 00:08:48.917 15:24:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:48.917 00:08:48.917 real 0m3.797s 00:08:48.917 user 0m5.987s 00:08:48.917 sys 0m0.830s 00:08:48.917 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.917 15:24:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.176 15:24:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:49.177 15:24:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:49.177 15:24:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.177 15:24:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.177 ************************************ 00:08:49.177 START TEST raid_read_error_test 00:08:49.177 ************************************ 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.177 15:24:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bEUVCKSGUu 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79773 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79773 00:08:49.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 79773 ']' 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.177 15:24:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.177 [2024-11-26 15:24:47.544936] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:08:49.177 [2024-11-26 15:24:47.545050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79773 ] 00:08:49.437 [2024-11-26 15:24:47.678924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:49.437 [2024-11-26 15:24:47.718038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.437 [2024-11-26 15:24:47.745050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.437 [2024-11-26 15:24:47.788372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.437 [2024-11-26 15:24:47.788405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.006 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.006 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 BaseBdev1_malloc 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 true 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 [2024-11-26 15:24:48.380467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:50.007 [2024-11-26 15:24:48.380522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.007 [2024-11-26 15:24:48.380548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:50.007 [2024-11-26 15:24:48.380582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.007 [2024-11-26 15:24:48.382717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.007 [2024-11-26 15:24:48.382795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:50.007 BaseBdev1 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 BaseBdev2_malloc 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 true 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 [2024-11-26 15:24:48.421182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:50.007 [2024-11-26 15:24:48.421240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.007 [2024-11-26 15:24:48.421256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:50.007 [2024-11-26 15:24:48.421266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.007 [2024-11-26 15:24:48.423266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.007 [2024-11-26 15:24:48.423301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:50.007 BaseBdev2 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 BaseBdev3_malloc 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:50.007 
15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 true 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 [2024-11-26 15:24:48.461892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:50.007 [2024-11-26 15:24:48.461942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.007 [2024-11-26 15:24:48.461958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:50.007 [2024-11-26 15:24:48.461967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.007 [2024-11-26 15:24:48.464044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.007 [2024-11-26 15:24:48.464082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:50.007 BaseBdev3 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.007 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.007 [2024-11-26 15:24:48.473943] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.007 [2024-11-26 15:24:48.475744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.007 [2024-11-26 15:24:48.475812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.007 [2024-11-26 15:24:48.475979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.007 [2024-11-26 15:24:48.475991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:50.007 [2024-11-26 15:24:48.476261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:08:50.007 [2024-11-26 15:24:48.476391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.007 [2024-11-26 15:24:48.476403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:50.007 [2024-11-26 15:24:48.476509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.267 "name": "raid_bdev1", 00:08:50.267 "uuid": "7ca6b3be-ef19-4422-bec7-f987ea440336", 00:08:50.267 "strip_size_kb": 64, 00:08:50.267 "state": "online", 00:08:50.267 "raid_level": "concat", 00:08:50.267 "superblock": true, 00:08:50.267 "num_base_bdevs": 3, 00:08:50.267 "num_base_bdevs_discovered": 3, 00:08:50.267 "num_base_bdevs_operational": 3, 00:08:50.267 "base_bdevs_list": [ 00:08:50.267 { 00:08:50.267 "name": "BaseBdev1", 00:08:50.267 "uuid": "6e3d7fe1-c162-5d6d-8655-8d1e0b406bac", 00:08:50.267 "is_configured": true, 00:08:50.267 "data_offset": 2048, 00:08:50.267 "data_size": 63488 00:08:50.267 }, 00:08:50.267 { 00:08:50.267 "name": "BaseBdev2", 00:08:50.267 "uuid": "25b1bbc1-4255-5f82-8312-9a6998d31ea8", 00:08:50.267 "is_configured": true, 00:08:50.267 "data_offset": 2048, 00:08:50.267 "data_size": 63488 00:08:50.267 }, 00:08:50.267 { 00:08:50.267 "name": "BaseBdev3", 00:08:50.267 "uuid": "b1f518c1-4965-548f-9d9d-c6bdf01661e7", 00:08:50.267 "is_configured": true, 00:08:50.267 "data_offset": 
2048, 00:08:50.267 "data_size": 63488 00:08:50.267 } 00:08:50.267 ] 00:08:50.267 }' 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.267 15:24:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.527 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:50.527 15:24:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:50.787 [2024-11-26 15:24:49.026473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 15:24:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.726 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.726 "name": "raid_bdev1", 00:08:51.726 "uuid": "7ca6b3be-ef19-4422-bec7-f987ea440336", 00:08:51.726 "strip_size_kb": 64, 00:08:51.727 "state": "online", 00:08:51.727 "raid_level": "concat", 00:08:51.727 "superblock": true, 00:08:51.727 "num_base_bdevs": 3, 00:08:51.727 "num_base_bdevs_discovered": 3, 00:08:51.727 "num_base_bdevs_operational": 3, 00:08:51.727 "base_bdevs_list": [ 00:08:51.727 { 00:08:51.727 "name": "BaseBdev1", 00:08:51.727 "uuid": "6e3d7fe1-c162-5d6d-8655-8d1e0b406bac", 00:08:51.727 "is_configured": true, 00:08:51.727 "data_offset": 2048, 00:08:51.727 "data_size": 63488 00:08:51.727 }, 00:08:51.727 { 00:08:51.727 "name": "BaseBdev2", 00:08:51.727 "uuid": "25b1bbc1-4255-5f82-8312-9a6998d31ea8", 00:08:51.727 "is_configured": true, 00:08:51.727 "data_offset": 2048, 
00:08:51.727 "data_size": 63488 00:08:51.727 }, 00:08:51.727 { 00:08:51.727 "name": "BaseBdev3", 00:08:51.727 "uuid": "b1f518c1-4965-548f-9d9d-c6bdf01661e7", 00:08:51.727 "is_configured": true, 00:08:51.727 "data_offset": 2048, 00:08:51.727 "data_size": 63488 00:08:51.727 } 00:08:51.727 ] 00:08:51.727 }' 00:08:51.727 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.727 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.986 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.986 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.986 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.986 [2024-11-26 15:24:50.348521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.986 [2024-11-26 15:24:50.348561] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.986 [2024-11-26 15:24:50.351052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.986 [2024-11-26 15:24:50.351129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.986 [2024-11-26 15:24:50.351219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.986 [2024-11-26 15:24:50.351267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:51.986 { 00:08:51.986 "results": [ 00:08:51.986 { 00:08:51.987 "job": "raid_bdev1", 00:08:51.987 "core_mask": "0x1", 00:08:51.987 "workload": "randrw", 00:08:51.987 "percentage": 50, 00:08:51.987 "status": "finished", 00:08:51.987 "queue_depth": 1, 00:08:51.987 "io_size": 131072, 00:08:51.987 "runtime": 1.320136, 00:08:51.987 "iops": 17005.8236424126, 00:08:51.987 "mibps": 
2125.727955301575, 00:08:51.987 "io_failed": 1, 00:08:51.987 "io_timeout": 0, 00:08:51.987 "avg_latency_us": 81.58097200615748, 00:08:51.987 "min_latency_us": 24.20988407565589, 00:08:51.987 "max_latency_us": 1370.9265231412883 00:08:51.987 } 00:08:51.987 ], 00:08:51.987 "core_count": 1 00:08:51.987 } 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79773 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 79773 ']' 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 79773 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79773 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79773' 00:08:51.987 killing process with pid 79773 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 79773 00:08:51.987 [2024-11-26 15:24:50.388147] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.987 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 79773 00:08:51.987 [2024-11-26 15:24:50.412420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.246 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:52.247 15:24:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bEUVCKSGUu 00:08:52.247 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:52.247 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:08:52.247 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:52.247 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.247 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.247 15:24:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:08:52.247 00:08:52.247 real 0m3.185s 00:08:52.247 user 0m4.046s 00:08:52.247 sys 0m0.472s 00:08:52.247 ************************************ 00:08:52.247 END TEST raid_read_error_test 00:08:52.247 ************************************ 00:08:52.247 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.247 15:24:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.247 15:24:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:52.247 15:24:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:52.247 15:24:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.247 15:24:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.247 ************************************ 00:08:52.247 START TEST raid_write_error_test 00:08:52.247 ************************************ 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:52.247 
15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:52.247 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fsArti9dpj 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79907 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79907 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 79907 ']' 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.513 15:24:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.513 [2024-11-26 15:24:50.803501] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:08:52.513 [2024-11-26 15:24:50.803629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79907 ] 00:08:52.513 [2024-11-26 15:24:50.936545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:52.513 [2024-11-26 15:24:50.965059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.780 [2024-11-26 15:24:50.993600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.780 [2024-11-26 15:24:51.036818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.780 [2024-11-26 15:24:51.036863] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.350 BaseBdev1_malloc 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.350 15:24:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.350 true 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.350 [2024-11-26 15:24:51.652903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:53.350 [2024-11-26 15:24:51.653037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.350 [2024-11-26 15:24:51.653074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:53.350 [2024-11-26 15:24:51.653096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.350 [2024-11-26 15:24:51.655207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.350 [2024-11-26 15:24:51.655240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:53.350 BaseBdev1 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.350 BaseBdev2_malloc 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.350 true 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.350 [2024-11-26 15:24:51.693568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:53.350 [2024-11-26 15:24:51.693616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.350 [2024-11-26 15:24:51.693631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:53.350 [2024-11-26 15:24:51.693641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.350 [2024-11-26 15:24:51.695613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.350 [2024-11-26 15:24:51.695702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:53.350 BaseBdev2 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:53.350 15:24:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.350 BaseBdev3_malloc 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.350 true 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.350 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.350 [2024-11-26 15:24:51.734104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:53.350 [2024-11-26 15:24:51.734154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.350 [2024-11-26 15:24:51.734170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:53.350 [2024-11-26 15:24:51.734198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.351 [2024-11-26 15:24:51.736302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.351 [2024-11-26 15:24:51.736337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:53.351 BaseBdev3 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.351 [2024-11-26 15:24:51.746155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.351 [2024-11-26 15:24:51.748057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.351 [2024-11-26 15:24:51.748128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.351 [2024-11-26 15:24:51.748317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:53.351 [2024-11-26 15:24:51.748330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.351 [2024-11-26 15:24:51.748629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:08:53.351 [2024-11-26 15:24:51.748779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:53.351 [2024-11-26 15:24:51.748792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:53.351 [2024-11-26 15:24:51.748917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.351 "name": "raid_bdev1", 00:08:53.351 "uuid": "49eb6519-fdeb-4cda-9748-9713b42f91ee", 00:08:53.351 "strip_size_kb": 64, 00:08:53.351 "state": "online", 00:08:53.351 "raid_level": "concat", 00:08:53.351 "superblock": true, 00:08:53.351 "num_base_bdevs": 3, 00:08:53.351 "num_base_bdevs_discovered": 3, 00:08:53.351 "num_base_bdevs_operational": 3, 00:08:53.351 "base_bdevs_list": [ 00:08:53.351 { 00:08:53.351 "name": "BaseBdev1", 00:08:53.351 "uuid": "4c9b9f83-985a-519a-b5b5-c3b3dcba7a0e", 00:08:53.351 "is_configured": true, 00:08:53.351 "data_offset": 2048, 
00:08:53.351 "data_size": 63488 00:08:53.351 }, 00:08:53.351 { 00:08:53.351 "name": "BaseBdev2", 00:08:53.351 "uuid": "5b4fc2b9-4533-5017-83b5-0d6bccbd52d0", 00:08:53.351 "is_configured": true, 00:08:53.351 "data_offset": 2048, 00:08:53.351 "data_size": 63488 00:08:53.351 }, 00:08:53.351 { 00:08:53.351 "name": "BaseBdev3", 00:08:53.351 "uuid": "b1a572a0-9809-5ec6-a4f7-ab6a05739eb6", 00:08:53.351 "is_configured": true, 00:08:53.351 "data_offset": 2048, 00:08:53.351 "data_size": 63488 00:08:53.351 } 00:08:53.351 ] 00:08:53.351 }' 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.351 15:24:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.921 15:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:53.921 15:24:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:53.921 [2024-11-26 15:24:52.274662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.861 "name": "raid_bdev1", 00:08:54.861 "uuid": "49eb6519-fdeb-4cda-9748-9713b42f91ee", 00:08:54.861 "strip_size_kb": 64, 00:08:54.861 "state": "online", 00:08:54.861 "raid_level": "concat", 00:08:54.861 "superblock": true, 00:08:54.861 "num_base_bdevs": 3, 00:08:54.861 "num_base_bdevs_discovered": 3, 
00:08:54.861 "num_base_bdevs_operational": 3, 00:08:54.861 "base_bdevs_list": [ 00:08:54.861 { 00:08:54.861 "name": "BaseBdev1", 00:08:54.861 "uuid": "4c9b9f83-985a-519a-b5b5-c3b3dcba7a0e", 00:08:54.861 "is_configured": true, 00:08:54.861 "data_offset": 2048, 00:08:54.861 "data_size": 63488 00:08:54.861 }, 00:08:54.861 { 00:08:54.861 "name": "BaseBdev2", 00:08:54.861 "uuid": "5b4fc2b9-4533-5017-83b5-0d6bccbd52d0", 00:08:54.861 "is_configured": true, 00:08:54.861 "data_offset": 2048, 00:08:54.861 "data_size": 63488 00:08:54.861 }, 00:08:54.861 { 00:08:54.861 "name": "BaseBdev3", 00:08:54.861 "uuid": "b1a572a0-9809-5ec6-a4f7-ab6a05739eb6", 00:08:54.861 "is_configured": true, 00:08:54.861 "data_offset": 2048, 00:08:54.861 "data_size": 63488 00:08:54.861 } 00:08:54.861 ] 00:08:54.861 }' 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.861 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.432 [2024-11-26 15:24:53.649097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.432 [2024-11-26 15:24:53.649216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.432 [2024-11-26 15:24:53.651745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.432 [2024-11-26 15:24:53.651835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.432 [2024-11-26 15:24:53.651894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.432 [2024-11-26 15:24:53.651936] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:55.432 { 00:08:55.432 "results": [ 00:08:55.432 { 00:08:55.432 "job": "raid_bdev1", 00:08:55.432 "core_mask": "0x1", 00:08:55.432 "workload": "randrw", 00:08:55.432 "percentage": 50, 00:08:55.432 "status": "finished", 00:08:55.432 "queue_depth": 1, 00:08:55.432 "io_size": 131072, 00:08:55.432 "runtime": 1.372667, 00:08:55.432 "iops": 17343.609192906948, 00:08:55.432 "mibps": 2167.9511491133685, 00:08:55.432 "io_failed": 1, 00:08:55.432 "io_timeout": 0, 00:08:55.432 "avg_latency_us": 79.95150606051233, 00:08:55.432 "min_latency_us": 24.54458293384468, 00:08:55.432 "max_latency_us": 1428.0484616055087 00:08:55.432 } 00:08:55.432 ], 00:08:55.432 "core_count": 1 00:08:55.432 } 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79907 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 79907 ']' 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 79907 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79907 00:08:55.432 killing process with pid 79907 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79907' 00:08:55.432 15:24:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 79907 00:08:55.432 [2024-11-26 15:24:53.698739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.432 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 79907 00:08:55.432 [2024-11-26 15:24:53.723708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fsArti9dpj 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:55.692 ************************************ 00:08:55.692 END TEST raid_write_error_test 00:08:55.692 ************************************ 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:55.692 00:08:55.692 real 0m3.239s 00:08:55.692 user 0m4.086s 00:08:55.692 sys 0m0.514s 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.692 15:24:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.692 15:24:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:55.692 15:24:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:08:55.692 15:24:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:55.692 15:24:53 
bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.692 15:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.693 ************************************ 00:08:55.693 START TEST raid_state_function_test 00:08:55.693 ************************************ 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:55.693 Process raid pid: 80035 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80035 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80035' 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80035 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80035 ']' 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 
-- # local max_retries=100 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.693 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.693 [2024-11-26 15:24:54.113110] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:08:55.693 [2024-11-26 15:24:54.113325] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.954 [2024-11-26 15:24:54.248438] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:55.954 [2024-11-26 15:24:54.284812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.954 [2024-11-26 15:24:54.311289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.954 [2024-11-26 15:24:54.353854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.954 [2024-11-26 15:24:54.353970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.525 [2024-11-26 15:24:54.936733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.525 [2024-11-26 15:24:54.936789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.525 [2024-11-26 15:24:54.936809] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.525 [2024-11-26 15:24:54.936818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.525 [2024-11-26 15:24:54.936827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.525 [2024-11-26 15:24:54.936834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.525 "name": "Existed_Raid", 00:08:56.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.525 "strip_size_kb": 0, 00:08:56.525 "state": "configuring", 00:08:56.525 "raid_level": "raid1", 00:08:56.525 
"superblock": false, 00:08:56.525 "num_base_bdevs": 3, 00:08:56.525 "num_base_bdevs_discovered": 0, 00:08:56.525 "num_base_bdevs_operational": 3, 00:08:56.525 "base_bdevs_list": [ 00:08:56.525 { 00:08:56.525 "name": "BaseBdev1", 00:08:56.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.525 "is_configured": false, 00:08:56.525 "data_offset": 0, 00:08:56.525 "data_size": 0 00:08:56.525 }, 00:08:56.525 { 00:08:56.525 "name": "BaseBdev2", 00:08:56.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.525 "is_configured": false, 00:08:56.525 "data_offset": 0, 00:08:56.525 "data_size": 0 00:08:56.525 }, 00:08:56.525 { 00:08:56.525 "name": "BaseBdev3", 00:08:56.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.525 "is_configured": false, 00:08:56.525 "data_offset": 0, 00:08:56.525 "data_size": 0 00:08:56.525 } 00:08:56.525 ] 00:08:56.525 }' 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.525 15:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.096 [2024-11-26 15:24:55.424801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.096 [2024-11-26 15:24:55.424890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.096 15:24:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.096 [2024-11-26 15:24:55.436808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.096 [2024-11-26 15:24:55.436884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.096 [2024-11-26 15:24:55.436913] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.096 [2024-11-26 15:24:55.436933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.096 [2024-11-26 15:24:55.436952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.096 [2024-11-26 15:24:55.436970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.096 [2024-11-26 15:24:55.457743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.096 BaseBdev1 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.096 [ 00:08:57.096 { 00:08:57.096 "name": "BaseBdev1", 00:08:57.096 "aliases": [ 00:08:57.096 "29854368-0164-4c76-b3cc-4d643d7ccfbe" 00:08:57.096 ], 00:08:57.096 "product_name": "Malloc disk", 00:08:57.096 "block_size": 512, 00:08:57.096 "num_blocks": 65536, 00:08:57.096 "uuid": "29854368-0164-4c76-b3cc-4d643d7ccfbe", 00:08:57.096 "assigned_rate_limits": { 00:08:57.096 "rw_ios_per_sec": 0, 00:08:57.096 "rw_mbytes_per_sec": 0, 00:08:57.096 "r_mbytes_per_sec": 0, 00:08:57.096 "w_mbytes_per_sec": 0 00:08:57.096 }, 00:08:57.096 "claimed": true, 00:08:57.096 "claim_type": "exclusive_write", 00:08:57.096 "zoned": false, 00:08:57.096 "supported_io_types": { 00:08:57.096 "read": true, 00:08:57.096 "write": true, 00:08:57.096 "unmap": true, 00:08:57.096 "flush": true, 00:08:57.096 "reset": true, 00:08:57.096 
"nvme_admin": false, 00:08:57.096 "nvme_io": false, 00:08:57.096 "nvme_io_md": false, 00:08:57.096 "write_zeroes": true, 00:08:57.096 "zcopy": true, 00:08:57.096 "get_zone_info": false, 00:08:57.096 "zone_management": false, 00:08:57.096 "zone_append": false, 00:08:57.096 "compare": false, 00:08:57.096 "compare_and_write": false, 00:08:57.096 "abort": true, 00:08:57.096 "seek_hole": false, 00:08:57.096 "seek_data": false, 00:08:57.096 "copy": true, 00:08:57.096 "nvme_iov_md": false 00:08:57.096 }, 00:08:57.096 "memory_domains": [ 00:08:57.096 { 00:08:57.096 "dma_device_id": "system", 00:08:57.096 "dma_device_type": 1 00:08:57.096 }, 00:08:57.096 { 00:08:57.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.096 "dma_device_type": 2 00:08:57.096 } 00:08:57.096 ], 00:08:57.096 "driver_specific": {} 00:08:57.096 } 00:08:57.096 ] 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.096 "name": "Existed_Raid", 00:08:57.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.096 "strip_size_kb": 0, 00:08:57.096 "state": "configuring", 00:08:57.096 "raid_level": "raid1", 00:08:57.096 "superblock": false, 00:08:57.096 "num_base_bdevs": 3, 00:08:57.096 "num_base_bdevs_discovered": 1, 00:08:57.096 "num_base_bdevs_operational": 3, 00:08:57.096 "base_bdevs_list": [ 00:08:57.096 { 00:08:57.096 "name": "BaseBdev1", 00:08:57.096 "uuid": "29854368-0164-4c76-b3cc-4d643d7ccfbe", 00:08:57.096 "is_configured": true, 00:08:57.096 "data_offset": 0, 00:08:57.096 "data_size": 65536 00:08:57.096 }, 00:08:57.096 { 00:08:57.096 "name": "BaseBdev2", 00:08:57.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.096 "is_configured": false, 00:08:57.096 "data_offset": 0, 00:08:57.096 "data_size": 0 00:08:57.096 }, 00:08:57.096 { 00:08:57.096 "name": "BaseBdev3", 00:08:57.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.096 "is_configured": false, 00:08:57.096 "data_offset": 0, 00:08:57.096 "data_size": 0 00:08:57.096 } 00:08:57.096 ] 00:08:57.096 }' 
00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.096 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.667 [2024-11-26 15:24:55.913911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.667 [2024-11-26 15:24:55.913971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.667 [2024-11-26 15:24:55.925944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.667 [2024-11-26 15:24:55.927805] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.667 [2024-11-26 15:24:55.927890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.667 [2024-11-26 15:24:55.927923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.667 [2024-11-26 15:24:55.927943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.667 15:24:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.667 "name": "Existed_Raid", 00:08:57.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.667 "strip_size_kb": 0, 00:08:57.667 "state": "configuring", 00:08:57.667 "raid_level": "raid1", 00:08:57.667 "superblock": false, 00:08:57.667 "num_base_bdevs": 3, 00:08:57.667 "num_base_bdevs_discovered": 1, 00:08:57.667 "num_base_bdevs_operational": 3, 00:08:57.667 "base_bdevs_list": [ 00:08:57.667 { 00:08:57.667 "name": "BaseBdev1", 00:08:57.667 "uuid": "29854368-0164-4c76-b3cc-4d643d7ccfbe", 00:08:57.667 "is_configured": true, 00:08:57.667 "data_offset": 0, 00:08:57.667 "data_size": 65536 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "name": "BaseBdev2", 00:08:57.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.667 "is_configured": false, 00:08:57.667 "data_offset": 0, 00:08:57.667 "data_size": 0 00:08:57.667 }, 00:08:57.667 { 00:08:57.667 "name": "BaseBdev3", 00:08:57.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.667 "is_configured": false, 00:08:57.667 "data_offset": 0, 00:08:57.667 "data_size": 0 00:08:57.667 } 00:08:57.667 ] 00:08:57.667 }' 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.667 15:24:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.928 BaseBdev2 00:08:57.928 [2024-11-26 15:24:56.341257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.928 15:24:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.928 [ 00:08:57.928 { 00:08:57.928 "name": "BaseBdev2", 00:08:57.928 "aliases": [ 00:08:57.928 "6c520d24-5990-414b-a36c-853fbe783368" 00:08:57.928 ], 00:08:57.928 "product_name": "Malloc disk", 00:08:57.928 "block_size": 512, 00:08:57.928 "num_blocks": 65536, 00:08:57.928 "uuid": "6c520d24-5990-414b-a36c-853fbe783368", 00:08:57.928 "assigned_rate_limits": { 00:08:57.928 "rw_ios_per_sec": 0, 00:08:57.928 "rw_mbytes_per_sec": 0, 00:08:57.928 
"r_mbytes_per_sec": 0, 00:08:57.928 "w_mbytes_per_sec": 0 00:08:57.928 }, 00:08:57.928 "claimed": true, 00:08:57.928 "claim_type": "exclusive_write", 00:08:57.928 "zoned": false, 00:08:57.928 "supported_io_types": { 00:08:57.928 "read": true, 00:08:57.928 "write": true, 00:08:57.928 "unmap": true, 00:08:57.928 "flush": true, 00:08:57.928 "reset": true, 00:08:57.928 "nvme_admin": false, 00:08:57.928 "nvme_io": false, 00:08:57.928 "nvme_io_md": false, 00:08:57.928 "write_zeroes": true, 00:08:57.928 "zcopy": true, 00:08:57.928 "get_zone_info": false, 00:08:57.928 "zone_management": false, 00:08:57.928 "zone_append": false, 00:08:57.928 "compare": false, 00:08:57.928 "compare_and_write": false, 00:08:57.928 "abort": true, 00:08:57.928 "seek_hole": false, 00:08:57.928 "seek_data": false, 00:08:57.928 "copy": true, 00:08:57.928 "nvme_iov_md": false 00:08:57.928 }, 00:08:57.928 "memory_domains": [ 00:08:57.928 { 00:08:57.928 "dma_device_id": "system", 00:08:57.928 "dma_device_type": 1 00:08:57.928 }, 00:08:57.928 { 00:08:57.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.928 "dma_device_type": 2 00:08:57.928 } 00:08:57.928 ], 00:08:57.928 "driver_specific": {} 00:08:57.928 } 00:08:57.928 ] 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.928 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.188 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.188 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.188 "name": "Existed_Raid", 00:08:58.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.188 "strip_size_kb": 0, 00:08:58.188 "state": "configuring", 00:08:58.188 "raid_level": "raid1", 00:08:58.188 "superblock": false, 00:08:58.188 "num_base_bdevs": 3, 00:08:58.188 "num_base_bdevs_discovered": 2, 00:08:58.188 "num_base_bdevs_operational": 3, 00:08:58.188 "base_bdevs_list": [ 00:08:58.188 { 00:08:58.188 "name": "BaseBdev1", 00:08:58.188 "uuid": "29854368-0164-4c76-b3cc-4d643d7ccfbe", 00:08:58.188 
"is_configured": true, 00:08:58.188 "data_offset": 0, 00:08:58.188 "data_size": 65536 00:08:58.188 }, 00:08:58.188 { 00:08:58.188 "name": "BaseBdev2", 00:08:58.188 "uuid": "6c520d24-5990-414b-a36c-853fbe783368", 00:08:58.188 "is_configured": true, 00:08:58.188 "data_offset": 0, 00:08:58.188 "data_size": 65536 00:08:58.188 }, 00:08:58.188 { 00:08:58.188 "name": "BaseBdev3", 00:08:58.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.188 "is_configured": false, 00:08:58.188 "data_offset": 0, 00:08:58.188 "data_size": 0 00:08:58.188 } 00:08:58.188 ] 00:08:58.188 }' 00:08:58.188 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.188 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.448 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.448 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.448 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.448 [2024-11-26 15:24:56.825665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.448 [2024-11-26 15:24:56.825724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:58.448 [2024-11-26 15:24:56.825736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:58.448 [2024-11-26 15:24:56.826110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:58.448 [2024-11-26 15:24:56.826300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:58.448 [2024-11-26 15:24:56.826323] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:58.448 [2024-11-26 15:24:56.826552] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:08:58.448 BaseBdev3 00:08:58.448 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.448 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:58.448 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:58.448 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.448 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.448 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.449 [ 00:08:58.449 { 00:08:58.449 "name": "BaseBdev3", 00:08:58.449 "aliases": [ 00:08:58.449 "3b42cb34-ec87-43ad-a0a7-8960e67f3533" 00:08:58.449 ], 00:08:58.449 "product_name": "Malloc disk", 00:08:58.449 "block_size": 512, 00:08:58.449 "num_blocks": 65536, 00:08:58.449 "uuid": "3b42cb34-ec87-43ad-a0a7-8960e67f3533", 00:08:58.449 "assigned_rate_limits": { 
00:08:58.449 "rw_ios_per_sec": 0, 00:08:58.449 "rw_mbytes_per_sec": 0, 00:08:58.449 "r_mbytes_per_sec": 0, 00:08:58.449 "w_mbytes_per_sec": 0 00:08:58.449 }, 00:08:58.449 "claimed": true, 00:08:58.449 "claim_type": "exclusive_write", 00:08:58.449 "zoned": false, 00:08:58.449 "supported_io_types": { 00:08:58.449 "read": true, 00:08:58.449 "write": true, 00:08:58.449 "unmap": true, 00:08:58.449 "flush": true, 00:08:58.449 "reset": true, 00:08:58.449 "nvme_admin": false, 00:08:58.449 "nvme_io": false, 00:08:58.449 "nvme_io_md": false, 00:08:58.449 "write_zeroes": true, 00:08:58.449 "zcopy": true, 00:08:58.449 "get_zone_info": false, 00:08:58.449 "zone_management": false, 00:08:58.449 "zone_append": false, 00:08:58.449 "compare": false, 00:08:58.449 "compare_and_write": false, 00:08:58.449 "abort": true, 00:08:58.449 "seek_hole": false, 00:08:58.449 "seek_data": false, 00:08:58.449 "copy": true, 00:08:58.449 "nvme_iov_md": false 00:08:58.449 }, 00:08:58.449 "memory_domains": [ 00:08:58.449 { 00:08:58.449 "dma_device_id": "system", 00:08:58.449 "dma_device_type": 1 00:08:58.449 }, 00:08:58.449 { 00:08:58.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.449 "dma_device_type": 2 00:08:58.449 } 00:08:58.449 ], 00:08:58.449 "driver_specific": {} 00:08:58.449 } 00:08:58.449 ] 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.449 15:24:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.449 "name": "Existed_Raid", 00:08:58.449 "uuid": "6d8f5455-cad4-4b67-979a-143f0715b353", 00:08:58.449 "strip_size_kb": 0, 00:08:58.449 "state": "online", 00:08:58.449 "raid_level": "raid1", 00:08:58.449 "superblock": false, 00:08:58.449 "num_base_bdevs": 3, 00:08:58.449 "num_base_bdevs_discovered": 3, 00:08:58.449 "num_base_bdevs_operational": 3, 00:08:58.449 "base_bdevs_list": [ 00:08:58.449 { 00:08:58.449 "name": "BaseBdev1", 00:08:58.449 
"uuid": "29854368-0164-4c76-b3cc-4d643d7ccfbe", 00:08:58.449 "is_configured": true, 00:08:58.449 "data_offset": 0, 00:08:58.449 "data_size": 65536 00:08:58.449 }, 00:08:58.449 { 00:08:58.449 "name": "BaseBdev2", 00:08:58.449 "uuid": "6c520d24-5990-414b-a36c-853fbe783368", 00:08:58.449 "is_configured": true, 00:08:58.449 "data_offset": 0, 00:08:58.449 "data_size": 65536 00:08:58.449 }, 00:08:58.449 { 00:08:58.449 "name": "BaseBdev3", 00:08:58.449 "uuid": "3b42cb34-ec87-43ad-a0a7-8960e67f3533", 00:08:58.449 "is_configured": true, 00:08:58.449 "data_offset": 0, 00:08:58.449 "data_size": 65536 00:08:58.449 } 00:08:58.449 ] 00:08:58.449 }' 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.449 15:24:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.020 [2024-11-26 
15:24:57.298127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.020 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.020 "name": "Existed_Raid", 00:08:59.020 "aliases": [ 00:08:59.020 "6d8f5455-cad4-4b67-979a-143f0715b353" 00:08:59.020 ], 00:08:59.020 "product_name": "Raid Volume", 00:08:59.020 "block_size": 512, 00:08:59.020 "num_blocks": 65536, 00:08:59.020 "uuid": "6d8f5455-cad4-4b67-979a-143f0715b353", 00:08:59.020 "assigned_rate_limits": { 00:08:59.020 "rw_ios_per_sec": 0, 00:08:59.020 "rw_mbytes_per_sec": 0, 00:08:59.020 "r_mbytes_per_sec": 0, 00:08:59.020 "w_mbytes_per_sec": 0 00:08:59.020 }, 00:08:59.020 "claimed": false, 00:08:59.020 "zoned": false, 00:08:59.020 "supported_io_types": { 00:08:59.020 "read": true, 00:08:59.020 "write": true, 00:08:59.020 "unmap": false, 00:08:59.020 "flush": false, 00:08:59.020 "reset": true, 00:08:59.020 "nvme_admin": false, 00:08:59.020 "nvme_io": false, 00:08:59.020 "nvme_io_md": false, 00:08:59.020 "write_zeroes": true, 00:08:59.020 "zcopy": false, 00:08:59.020 "get_zone_info": false, 00:08:59.020 "zone_management": false, 00:08:59.020 "zone_append": false, 00:08:59.020 "compare": false, 00:08:59.020 "compare_and_write": false, 00:08:59.020 "abort": false, 00:08:59.020 "seek_hole": false, 00:08:59.020 "seek_data": false, 00:08:59.020 "copy": false, 00:08:59.020 "nvme_iov_md": false 00:08:59.020 }, 00:08:59.020 "memory_domains": [ 00:08:59.021 { 00:08:59.021 "dma_device_id": "system", 00:08:59.021 "dma_device_type": 1 00:08:59.021 }, 00:08:59.021 { 00:08:59.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.021 "dma_device_type": 2 00:08:59.021 }, 00:08:59.021 { 00:08:59.021 "dma_device_id": "system", 00:08:59.021 "dma_device_type": 1 00:08:59.021 }, 00:08:59.021 { 00:08:59.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:59.021 "dma_device_type": 2 00:08:59.021 }, 00:08:59.021 { 00:08:59.021 "dma_device_id": "system", 00:08:59.021 "dma_device_type": 1 00:08:59.021 }, 00:08:59.021 { 00:08:59.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.021 "dma_device_type": 2 00:08:59.021 } 00:08:59.021 ], 00:08:59.021 "driver_specific": { 00:08:59.021 "raid": { 00:08:59.021 "uuid": "6d8f5455-cad4-4b67-979a-143f0715b353", 00:08:59.021 "strip_size_kb": 0, 00:08:59.021 "state": "online", 00:08:59.021 "raid_level": "raid1", 00:08:59.021 "superblock": false, 00:08:59.021 "num_base_bdevs": 3, 00:08:59.021 "num_base_bdevs_discovered": 3, 00:08:59.021 "num_base_bdevs_operational": 3, 00:08:59.021 "base_bdevs_list": [ 00:08:59.021 { 00:08:59.021 "name": "BaseBdev1", 00:08:59.021 "uuid": "29854368-0164-4c76-b3cc-4d643d7ccfbe", 00:08:59.021 "is_configured": true, 00:08:59.021 "data_offset": 0, 00:08:59.021 "data_size": 65536 00:08:59.021 }, 00:08:59.021 { 00:08:59.021 "name": "BaseBdev2", 00:08:59.021 "uuid": "6c520d24-5990-414b-a36c-853fbe783368", 00:08:59.021 "is_configured": true, 00:08:59.021 "data_offset": 0, 00:08:59.021 "data_size": 65536 00:08:59.021 }, 00:08:59.021 { 00:08:59.021 "name": "BaseBdev3", 00:08:59.021 "uuid": "3b42cb34-ec87-43ad-a0a7-8960e67f3533", 00:08:59.021 "is_configured": true, 00:08:59.021 "data_offset": 0, 00:08:59.021 "data_size": 65536 00:08:59.021 } 00:08:59.021 ] 00:08:59.021 } 00:08:59.021 } 00:08:59.021 }' 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:59.021 BaseBdev2 00:08:59.021 BaseBdev3' 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.021 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.280 [2024-11-26 15:24:57.577964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:59.280 15:24:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.280 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.280 "name": "Existed_Raid", 00:08:59.280 "uuid": "6d8f5455-cad4-4b67-979a-143f0715b353", 00:08:59.280 "strip_size_kb": 0, 00:08:59.280 "state": "online", 00:08:59.280 "raid_level": "raid1", 
00:08:59.280 "superblock": false, 00:08:59.280 "num_base_bdevs": 3, 00:08:59.280 "num_base_bdevs_discovered": 2, 00:08:59.280 "num_base_bdevs_operational": 2, 00:08:59.280 "base_bdevs_list": [ 00:08:59.280 { 00:08:59.280 "name": null, 00:08:59.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.280 "is_configured": false, 00:08:59.280 "data_offset": 0, 00:08:59.280 "data_size": 65536 00:08:59.280 }, 00:08:59.280 { 00:08:59.280 "name": "BaseBdev2", 00:08:59.280 "uuid": "6c520d24-5990-414b-a36c-853fbe783368", 00:08:59.280 "is_configured": true, 00:08:59.280 "data_offset": 0, 00:08:59.280 "data_size": 65536 00:08:59.280 }, 00:08:59.280 { 00:08:59.280 "name": "BaseBdev3", 00:08:59.280 "uuid": "3b42cb34-ec87-43ad-a0a7-8960e67f3533", 00:08:59.280 "is_configured": true, 00:08:59.280 "data_offset": 0, 00:08:59.280 "data_size": 65536 00:08:59.280 } 00:08:59.281 ] 00:08:59.281 }' 00:08:59.281 15:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.281 15:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.849 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:59.849 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.849 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.849 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.849 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.849 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.850 15:24:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.850 [2024-11-26 15:24:58.105425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.850 
[2024-11-26 15:24:58.168732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.850 [2024-11-26 15:24:58.168834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.850 [2024-11-26 15:24:58.180080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.850 [2024-11-26 15:24:58.180137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.850 [2024-11-26 15:24:58.180150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.850 BaseBdev2 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:59.850 [ 00:08:59.850 { 00:08:59.850 "name": "BaseBdev2", 00:08:59.850 "aliases": [ 00:08:59.850 "7df20897-b31c-40cc-828f-af554087f38f" 00:08:59.850 ], 00:08:59.850 "product_name": "Malloc disk", 00:08:59.850 "block_size": 512, 00:08:59.850 "num_blocks": 65536, 00:08:59.850 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:08:59.850 "assigned_rate_limits": { 00:08:59.850 "rw_ios_per_sec": 0, 00:08:59.850 "rw_mbytes_per_sec": 0, 00:08:59.850 "r_mbytes_per_sec": 0, 00:08:59.850 "w_mbytes_per_sec": 0 00:08:59.850 }, 00:08:59.850 "claimed": false, 00:08:59.850 "zoned": false, 00:08:59.850 "supported_io_types": { 00:08:59.850 "read": true, 00:08:59.850 "write": true, 00:08:59.850 "unmap": true, 00:08:59.850 "flush": true, 00:08:59.850 "reset": true, 00:08:59.850 "nvme_admin": false, 00:08:59.850 "nvme_io": false, 00:08:59.850 "nvme_io_md": false, 00:08:59.850 "write_zeroes": true, 00:08:59.850 "zcopy": true, 00:08:59.850 "get_zone_info": false, 00:08:59.850 "zone_management": false, 00:08:59.850 "zone_append": false, 00:08:59.850 "compare": false, 00:08:59.850 "compare_and_write": false, 00:08:59.850 "abort": true, 00:08:59.850 "seek_hole": false, 00:08:59.850 "seek_data": false, 00:08:59.850 "copy": true, 00:08:59.850 "nvme_iov_md": false 00:08:59.850 }, 00:08:59.850 "memory_domains": [ 00:08:59.850 { 00:08:59.850 "dma_device_id": "system", 00:08:59.850 "dma_device_type": 1 00:08:59.850 }, 00:08:59.850 { 00:08:59.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.850 "dma_device_type": 2 00:08:59.850 } 00:08:59.850 ], 00:08:59.850 "driver_specific": {} 00:08:59.850 } 00:08:59.850 ] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.850 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.850 BaseBdev3 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.851 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:00.113 [ 00:09:00.113 { 00:09:00.113 "name": "BaseBdev3", 00:09:00.113 "aliases": [ 00:09:00.113 "79815b70-49ea-4317-9a12-06ea9ad9a4a7" 00:09:00.113 ], 00:09:00.113 "product_name": "Malloc disk", 00:09:00.113 "block_size": 512, 00:09:00.113 "num_blocks": 65536, 00:09:00.113 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:00.113 "assigned_rate_limits": { 00:09:00.113 "rw_ios_per_sec": 0, 00:09:00.113 "rw_mbytes_per_sec": 0, 00:09:00.113 "r_mbytes_per_sec": 0, 00:09:00.113 "w_mbytes_per_sec": 0 00:09:00.113 }, 00:09:00.113 "claimed": false, 00:09:00.113 "zoned": false, 00:09:00.113 "supported_io_types": { 00:09:00.113 "read": true, 00:09:00.113 "write": true, 00:09:00.113 "unmap": true, 00:09:00.113 "flush": true, 00:09:00.113 "reset": true, 00:09:00.113 "nvme_admin": false, 00:09:00.113 "nvme_io": false, 00:09:00.113 "nvme_io_md": false, 00:09:00.113 "write_zeroes": true, 00:09:00.113 "zcopy": true, 00:09:00.113 "get_zone_info": false, 00:09:00.113 "zone_management": false, 00:09:00.113 "zone_append": false, 00:09:00.113 "compare": false, 00:09:00.113 "compare_and_write": false, 00:09:00.113 "abort": true, 00:09:00.113 "seek_hole": false, 00:09:00.113 "seek_data": false, 00:09:00.113 "copy": true, 00:09:00.113 "nvme_iov_md": false 00:09:00.113 }, 00:09:00.113 "memory_domains": [ 00:09:00.113 { 00:09:00.113 "dma_device_id": "system", 00:09:00.113 "dma_device_type": 1 00:09:00.113 }, 00:09:00.113 { 00:09:00.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.113 "dma_device_type": 2 00:09:00.113 } 00:09:00.113 ], 00:09:00.113 "driver_specific": {} 00:09:00.113 } 00:09:00.113 ] 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.113 [2024-11-26 15:24:58.344276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.113 [2024-11-26 15:24:58.344316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.113 [2024-11-26 15:24:58.344351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.113 [2024-11-26 15:24:58.346144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.113 "name": "Existed_Raid", 00:09:00.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.113 "strip_size_kb": 0, 00:09:00.113 "state": "configuring", 00:09:00.113 "raid_level": "raid1", 00:09:00.113 "superblock": false, 00:09:00.113 "num_base_bdevs": 3, 00:09:00.113 "num_base_bdevs_discovered": 2, 00:09:00.113 "num_base_bdevs_operational": 3, 00:09:00.113 "base_bdevs_list": [ 00:09:00.113 { 00:09:00.113 "name": "BaseBdev1", 00:09:00.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.113 "is_configured": false, 00:09:00.113 "data_offset": 0, 00:09:00.113 "data_size": 0 00:09:00.113 }, 00:09:00.113 { 00:09:00.113 "name": "BaseBdev2", 00:09:00.113 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:09:00.113 "is_configured": true, 00:09:00.113 "data_offset": 0, 00:09:00.113 "data_size": 65536 00:09:00.113 }, 00:09:00.113 { 00:09:00.113 "name": "BaseBdev3", 00:09:00.113 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:00.113 "is_configured": true, 00:09:00.113 "data_offset": 0, 00:09:00.113 "data_size": 65536 00:09:00.113 } 00:09:00.113 ] 
00:09:00.113 }' 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.113 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.382 [2024-11-26 15:24:58.772407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.382 "name": "Existed_Raid", 00:09:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.382 "strip_size_kb": 0, 00:09:00.382 "state": "configuring", 00:09:00.382 "raid_level": "raid1", 00:09:00.382 "superblock": false, 00:09:00.382 "num_base_bdevs": 3, 00:09:00.382 "num_base_bdevs_discovered": 1, 00:09:00.382 "num_base_bdevs_operational": 3, 00:09:00.382 "base_bdevs_list": [ 00:09:00.382 { 00:09:00.382 "name": "BaseBdev1", 00:09:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.382 "is_configured": false, 00:09:00.382 "data_offset": 0, 00:09:00.382 "data_size": 0 00:09:00.382 }, 00:09:00.382 { 00:09:00.382 "name": null, 00:09:00.382 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:09:00.382 "is_configured": false, 00:09:00.382 "data_offset": 0, 00:09:00.382 "data_size": 65536 00:09:00.382 }, 00:09:00.382 { 00:09:00.382 "name": "BaseBdev3", 00:09:00.382 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:00.382 "is_configured": true, 00:09:00.382 "data_offset": 0, 00:09:00.382 "data_size": 65536 00:09:00.382 } 00:09:00.382 ] 00:09:00.382 }' 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.382 15:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.960 BaseBdev1 00:09:00.960 [2024-11-26 15:24:59.263507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.960 [ 00:09:00.960 { 00:09:00.960 "name": "BaseBdev1", 00:09:00.960 "aliases": [ 00:09:00.960 "c988aaa3-e99c-4129-b8c0-498d582d9cae" 00:09:00.960 ], 00:09:00.960 "product_name": "Malloc disk", 00:09:00.960 "block_size": 512, 00:09:00.960 "num_blocks": 65536, 00:09:00.960 "uuid": "c988aaa3-e99c-4129-b8c0-498d582d9cae", 00:09:00.960 "assigned_rate_limits": { 00:09:00.960 "rw_ios_per_sec": 0, 00:09:00.960 "rw_mbytes_per_sec": 0, 00:09:00.960 "r_mbytes_per_sec": 0, 00:09:00.960 "w_mbytes_per_sec": 0 00:09:00.960 }, 00:09:00.960 "claimed": true, 00:09:00.960 "claim_type": "exclusive_write", 00:09:00.960 "zoned": false, 00:09:00.960 "supported_io_types": { 00:09:00.960 "read": true, 00:09:00.960 "write": true, 00:09:00.960 "unmap": true, 00:09:00.960 "flush": true, 00:09:00.960 "reset": true, 00:09:00.960 "nvme_admin": false, 00:09:00.960 "nvme_io": false, 00:09:00.960 "nvme_io_md": false, 00:09:00.960 "write_zeroes": true, 00:09:00.960 "zcopy": true, 00:09:00.960 "get_zone_info": false, 00:09:00.960 "zone_management": false, 00:09:00.960 "zone_append": false, 00:09:00.960 "compare": false, 00:09:00.960 "compare_and_write": false, 00:09:00.960 "abort": true, 00:09:00.960 "seek_hole": false, 00:09:00.960 "seek_data": false, 00:09:00.960 "copy": true, 00:09:00.960 "nvme_iov_md": false 00:09:00.960 }, 
00:09:00.960 "memory_domains": [ 00:09:00.960 { 00:09:00.960 "dma_device_id": "system", 00:09:00.960 "dma_device_type": 1 00:09:00.960 }, 00:09:00.960 { 00:09:00.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.960 "dma_device_type": 2 00:09:00.960 } 00:09:00.960 ], 00:09:00.960 "driver_specific": {} 00:09:00.960 } 00:09:00.960 ] 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.960 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.960 "name": "Existed_Raid", 00:09:00.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.960 "strip_size_kb": 0, 00:09:00.960 "state": "configuring", 00:09:00.960 "raid_level": "raid1", 00:09:00.960 "superblock": false, 00:09:00.960 "num_base_bdevs": 3, 00:09:00.960 "num_base_bdevs_discovered": 2, 00:09:00.960 "num_base_bdevs_operational": 3, 00:09:00.960 "base_bdevs_list": [ 00:09:00.960 { 00:09:00.960 "name": "BaseBdev1", 00:09:00.960 "uuid": "c988aaa3-e99c-4129-b8c0-498d582d9cae", 00:09:00.960 "is_configured": true, 00:09:00.960 "data_offset": 0, 00:09:00.960 "data_size": 65536 00:09:00.960 }, 00:09:00.960 { 00:09:00.960 "name": null, 00:09:00.960 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:09:00.960 "is_configured": false, 00:09:00.960 "data_offset": 0, 00:09:00.960 "data_size": 65536 00:09:00.960 }, 00:09:00.960 { 00:09:00.961 "name": "BaseBdev3", 00:09:00.961 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:00.961 "is_configured": true, 00:09:00.961 "data_offset": 0, 00:09:00.961 "data_size": 65536 00:09:00.961 } 00:09:00.961 ] 00:09:00.961 }' 00:09:00.961 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.961 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.531 [2024-11-26 15:24:59.799714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.531 
15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.531 "name": "Existed_Raid", 00:09:01.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.531 "strip_size_kb": 0, 00:09:01.531 "state": "configuring", 00:09:01.531 "raid_level": "raid1", 00:09:01.531 "superblock": false, 00:09:01.531 "num_base_bdevs": 3, 00:09:01.531 "num_base_bdevs_discovered": 1, 00:09:01.531 "num_base_bdevs_operational": 3, 00:09:01.531 "base_bdevs_list": [ 00:09:01.531 { 00:09:01.531 "name": "BaseBdev1", 00:09:01.531 "uuid": "c988aaa3-e99c-4129-b8c0-498d582d9cae", 00:09:01.531 "is_configured": true, 00:09:01.531 "data_offset": 0, 00:09:01.531 "data_size": 65536 00:09:01.531 }, 00:09:01.531 { 00:09:01.531 "name": null, 00:09:01.531 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:09:01.531 "is_configured": false, 00:09:01.531 "data_offset": 0, 00:09:01.531 "data_size": 65536 00:09:01.531 }, 00:09:01.531 { 00:09:01.531 "name": null, 00:09:01.531 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:01.531 "is_configured": false, 00:09:01.531 "data_offset": 0, 00:09:01.531 "data_size": 65536 00:09:01.531 } 00:09:01.531 ] 00:09:01.531 }' 00:09:01.531 15:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.531 15:24:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.791 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.791 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.792 [2024-11-26 15:25:00.259867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.792 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.052 15:25:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.052 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.052 "name": "Existed_Raid", 00:09:02.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.052 "strip_size_kb": 0, 00:09:02.052 "state": "configuring", 00:09:02.052 "raid_level": "raid1", 00:09:02.052 "superblock": false, 00:09:02.053 "num_base_bdevs": 3, 00:09:02.053 "num_base_bdevs_discovered": 2, 00:09:02.053 "num_base_bdevs_operational": 3, 00:09:02.053 "base_bdevs_list": [ 00:09:02.053 { 00:09:02.053 "name": "BaseBdev1", 00:09:02.053 "uuid": "c988aaa3-e99c-4129-b8c0-498d582d9cae", 00:09:02.053 "is_configured": true, 00:09:02.053 "data_offset": 0, 00:09:02.053 "data_size": 65536 00:09:02.053 }, 00:09:02.053 { 00:09:02.053 "name": null, 00:09:02.053 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:09:02.053 "is_configured": false, 00:09:02.053 "data_offset": 
0, 00:09:02.053 "data_size": 65536 00:09:02.053 }, 00:09:02.053 { 00:09:02.053 "name": "BaseBdev3", 00:09:02.053 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:02.053 "is_configured": true, 00:09:02.053 "data_offset": 0, 00:09:02.053 "data_size": 65536 00:09:02.053 } 00:09:02.053 ] 00:09:02.053 }' 00:09:02.053 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.053 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.313 [2024-11-26 15:25:00.736005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.313 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.573 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.573 "name": "Existed_Raid", 00:09:02.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.573 "strip_size_kb": 0, 00:09:02.573 "state": "configuring", 00:09:02.573 "raid_level": "raid1", 00:09:02.573 "superblock": false, 00:09:02.573 "num_base_bdevs": 3, 00:09:02.573 "num_base_bdevs_discovered": 1, 00:09:02.573 "num_base_bdevs_operational": 3, 00:09:02.573 "base_bdevs_list": [ 
00:09:02.573 { 00:09:02.573 "name": null, 00:09:02.573 "uuid": "c988aaa3-e99c-4129-b8c0-498d582d9cae", 00:09:02.573 "is_configured": false, 00:09:02.573 "data_offset": 0, 00:09:02.573 "data_size": 65536 00:09:02.573 }, 00:09:02.573 { 00:09:02.573 "name": null, 00:09:02.573 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:09:02.573 "is_configured": false, 00:09:02.573 "data_offset": 0, 00:09:02.573 "data_size": 65536 00:09:02.573 }, 00:09:02.573 { 00:09:02.573 "name": "BaseBdev3", 00:09:02.573 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:02.573 "is_configured": true, 00:09:02.573 "data_offset": 0, 00:09:02.573 "data_size": 65536 00:09:02.573 } 00:09:02.573 ] 00:09:02.573 }' 00:09:02.573 15:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.573 15:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.832 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.832 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.832 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.832 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.832 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.832 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:02.832 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:02.832 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.833 [2024-11-26 15:25:01.202658] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:02.833 "name": "Existed_Raid", 00:09:02.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.833 "strip_size_kb": 0, 00:09:02.833 "state": "configuring", 00:09:02.833 "raid_level": "raid1", 00:09:02.833 "superblock": false, 00:09:02.833 "num_base_bdevs": 3, 00:09:02.833 "num_base_bdevs_discovered": 2, 00:09:02.833 "num_base_bdevs_operational": 3, 00:09:02.833 "base_bdevs_list": [ 00:09:02.833 { 00:09:02.833 "name": null, 00:09:02.833 "uuid": "c988aaa3-e99c-4129-b8c0-498d582d9cae", 00:09:02.833 "is_configured": false, 00:09:02.833 "data_offset": 0, 00:09:02.833 "data_size": 65536 00:09:02.833 }, 00:09:02.833 { 00:09:02.833 "name": "BaseBdev2", 00:09:02.833 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:09:02.833 "is_configured": true, 00:09:02.833 "data_offset": 0, 00:09:02.833 "data_size": 65536 00:09:02.833 }, 00:09:02.833 { 00:09:02.833 "name": "BaseBdev3", 00:09:02.833 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:02.833 "is_configured": true, 00:09:02.833 "data_offset": 0, 00:09:02.833 "data_size": 65536 00:09:02.833 } 00:09:02.833 ] 00:09:02.833 }' 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.833 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 
00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c988aaa3-e99c-4129-b8c0-498d582d9cae 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.403 [2024-11-26 15:25:01.729856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:03.403 [2024-11-26 15:25:01.729898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:03.403 [2024-11-26 15:25:01.729910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:03.403 [2024-11-26 15:25:01.730151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:03.403 [2024-11-26 15:25:01.730314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:03.403 [2024-11-26 15:25:01.730324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:03.403 [2024-11-26 15:25:01.730514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.403 NewBaseBdev 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.403 
15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.403 [ 00:09:03.403 { 00:09:03.403 "name": "NewBaseBdev", 00:09:03.403 "aliases": [ 00:09:03.403 "c988aaa3-e99c-4129-b8c0-498d582d9cae" 00:09:03.403 ], 00:09:03.403 "product_name": "Malloc disk", 00:09:03.403 "block_size": 512, 00:09:03.403 "num_blocks": 65536, 00:09:03.403 "uuid": "c988aaa3-e99c-4129-b8c0-498d582d9cae", 00:09:03.403 "assigned_rate_limits": { 00:09:03.403 "rw_ios_per_sec": 0, 00:09:03.403 "rw_mbytes_per_sec": 0, 00:09:03.403 "r_mbytes_per_sec": 0, 00:09:03.403 "w_mbytes_per_sec": 0 00:09:03.403 }, 00:09:03.403 
"claimed": true, 00:09:03.403 "claim_type": "exclusive_write", 00:09:03.403 "zoned": false, 00:09:03.403 "supported_io_types": { 00:09:03.403 "read": true, 00:09:03.403 "write": true, 00:09:03.403 "unmap": true, 00:09:03.403 "flush": true, 00:09:03.403 "reset": true, 00:09:03.403 "nvme_admin": false, 00:09:03.403 "nvme_io": false, 00:09:03.403 "nvme_io_md": false, 00:09:03.403 "write_zeroes": true, 00:09:03.403 "zcopy": true, 00:09:03.403 "get_zone_info": false, 00:09:03.403 "zone_management": false, 00:09:03.403 "zone_append": false, 00:09:03.403 "compare": false, 00:09:03.403 "compare_and_write": false, 00:09:03.403 "abort": true, 00:09:03.403 "seek_hole": false, 00:09:03.403 "seek_data": false, 00:09:03.403 "copy": true, 00:09:03.403 "nvme_iov_md": false 00:09:03.403 }, 00:09:03.403 "memory_domains": [ 00:09:03.403 { 00:09:03.403 "dma_device_id": "system", 00:09:03.403 "dma_device_type": 1 00:09:03.403 }, 00:09:03.403 { 00:09:03.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.403 "dma_device_type": 2 00:09:03.403 } 00:09:03.403 ], 00:09:03.403 "driver_specific": {} 00:09:03.403 } 00:09:03.403 ] 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.403 "name": "Existed_Raid", 00:09:03.403 "uuid": "0288dbf9-842b-462a-ba2e-2ddef85beee1", 00:09:03.403 "strip_size_kb": 0, 00:09:03.403 "state": "online", 00:09:03.403 "raid_level": "raid1", 00:09:03.403 "superblock": false, 00:09:03.403 "num_base_bdevs": 3, 00:09:03.403 "num_base_bdevs_discovered": 3, 00:09:03.403 "num_base_bdevs_operational": 3, 00:09:03.403 "base_bdevs_list": [ 00:09:03.403 { 00:09:03.403 "name": "NewBaseBdev", 00:09:03.403 "uuid": "c988aaa3-e99c-4129-b8c0-498d582d9cae", 00:09:03.403 "is_configured": true, 00:09:03.403 "data_offset": 0, 00:09:03.403 "data_size": 65536 00:09:03.403 }, 00:09:03.403 { 00:09:03.403 "name": "BaseBdev2", 00:09:03.403 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:09:03.403 "is_configured": true, 00:09:03.403 "data_offset": 0, 00:09:03.403 "data_size": 65536 
00:09:03.403 }, 00:09:03.403 { 00:09:03.403 "name": "BaseBdev3", 00:09:03.403 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:03.403 "is_configured": true, 00:09:03.403 "data_offset": 0, 00:09:03.403 "data_size": 65536 00:09:03.403 } 00:09:03.403 ] 00:09:03.403 }' 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.403 15:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.975 [2024-11-26 15:25:02.226397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.975 "name": "Existed_Raid", 00:09:03.975 "aliases": [ 
00:09:03.975 "0288dbf9-842b-462a-ba2e-2ddef85beee1" 00:09:03.975 ], 00:09:03.975 "product_name": "Raid Volume", 00:09:03.975 "block_size": 512, 00:09:03.975 "num_blocks": 65536, 00:09:03.975 "uuid": "0288dbf9-842b-462a-ba2e-2ddef85beee1", 00:09:03.975 "assigned_rate_limits": { 00:09:03.975 "rw_ios_per_sec": 0, 00:09:03.975 "rw_mbytes_per_sec": 0, 00:09:03.975 "r_mbytes_per_sec": 0, 00:09:03.975 "w_mbytes_per_sec": 0 00:09:03.975 }, 00:09:03.975 "claimed": false, 00:09:03.975 "zoned": false, 00:09:03.975 "supported_io_types": { 00:09:03.975 "read": true, 00:09:03.975 "write": true, 00:09:03.975 "unmap": false, 00:09:03.975 "flush": false, 00:09:03.975 "reset": true, 00:09:03.975 "nvme_admin": false, 00:09:03.975 "nvme_io": false, 00:09:03.975 "nvme_io_md": false, 00:09:03.975 "write_zeroes": true, 00:09:03.975 "zcopy": false, 00:09:03.975 "get_zone_info": false, 00:09:03.975 "zone_management": false, 00:09:03.975 "zone_append": false, 00:09:03.975 "compare": false, 00:09:03.975 "compare_and_write": false, 00:09:03.975 "abort": false, 00:09:03.975 "seek_hole": false, 00:09:03.975 "seek_data": false, 00:09:03.975 "copy": false, 00:09:03.975 "nvme_iov_md": false 00:09:03.975 }, 00:09:03.975 "memory_domains": [ 00:09:03.975 { 00:09:03.975 "dma_device_id": "system", 00:09:03.975 "dma_device_type": 1 00:09:03.975 }, 00:09:03.975 { 00:09:03.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.975 "dma_device_type": 2 00:09:03.975 }, 00:09:03.975 { 00:09:03.975 "dma_device_id": "system", 00:09:03.975 "dma_device_type": 1 00:09:03.975 }, 00:09:03.975 { 00:09:03.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.975 "dma_device_type": 2 00:09:03.975 }, 00:09:03.975 { 00:09:03.975 "dma_device_id": "system", 00:09:03.975 "dma_device_type": 1 00:09:03.975 }, 00:09:03.975 { 00:09:03.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.975 "dma_device_type": 2 00:09:03.975 } 00:09:03.975 ], 00:09:03.975 "driver_specific": { 00:09:03.975 "raid": { 00:09:03.975 "uuid": 
"0288dbf9-842b-462a-ba2e-2ddef85beee1", 00:09:03.975 "strip_size_kb": 0, 00:09:03.975 "state": "online", 00:09:03.975 "raid_level": "raid1", 00:09:03.975 "superblock": false, 00:09:03.975 "num_base_bdevs": 3, 00:09:03.975 "num_base_bdevs_discovered": 3, 00:09:03.975 "num_base_bdevs_operational": 3, 00:09:03.975 "base_bdevs_list": [ 00:09:03.975 { 00:09:03.975 "name": "NewBaseBdev", 00:09:03.975 "uuid": "c988aaa3-e99c-4129-b8c0-498d582d9cae", 00:09:03.975 "is_configured": true, 00:09:03.975 "data_offset": 0, 00:09:03.975 "data_size": 65536 00:09:03.975 }, 00:09:03.975 { 00:09:03.975 "name": "BaseBdev2", 00:09:03.975 "uuid": "7df20897-b31c-40cc-828f-af554087f38f", 00:09:03.975 "is_configured": true, 00:09:03.975 "data_offset": 0, 00:09:03.975 "data_size": 65536 00:09:03.975 }, 00:09:03.975 { 00:09:03.975 "name": "BaseBdev3", 00:09:03.975 "uuid": "79815b70-49ea-4317-9a12-06ea9ad9a4a7", 00:09:03.975 "is_configured": true, 00:09:03.975 "data_offset": 0, 00:09:03.975 "data_size": 65536 00:09:03.975 } 00:09:03.975 ] 00:09:03.975 } 00:09:03.975 } 00:09:03.975 }' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:03.975 BaseBdev2 00:09:03.975 BaseBdev3' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.975 15:25:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.975 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.235 [2024-11-26 15:25:02.482118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.235 [2024-11-26 15:25:02.482145] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.235 [2024-11-26 15:25:02.482251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.235 [2024-11-26 15:25:02.482498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.235 [2024-11-26 15:25:02.482515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80035 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80035 ']' 00:09:04.235 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80035 00:09:04.235 15:25:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:04.236 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.236 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80035 00:09:04.236 killing process with pid 80035 00:09:04.236 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.236 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.236 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80035' 00:09:04.236 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80035 00:09:04.236 [2024-11-26 15:25:02.526597] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.236 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80035 00:09:04.236 [2024-11-26 15:25:02.557030] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.495 ************************************ 00:09:04.495 END TEST raid_state_function_test 00:09:04.495 ************************************ 00:09:04.495 15:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:04.495 00:09:04.495 real 0m8.757s 00:09:04.496 user 0m15.026s 00:09:04.496 sys 0m1.738s 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.496 15:25:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:04.496 15:25:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:04.496 15:25:02 bdev_raid -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:09:04.496 15:25:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.496 ************************************ 00:09:04.496 START TEST raid_state_function_test_sb 00:09:04.496 ************************************ 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:04.496 Process raid pid: 80639 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80639 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80639' 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80639 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80639 ']' 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.496 15:25:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.496 15:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.496 [2024-11-26 15:25:02.936157] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:09:04.496 [2024-11-26 15:25:02.936394] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.756 [2024-11-26 15:25:03.070871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:04.756 [2024-11-26 15:25:03.094338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.756 [2024-11-26 15:25:03.119388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.756 [2024-11-26 15:25:03.161941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.756 [2024-11-26 15:25:03.162060] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.326 [2024-11-26 15:25:03.760710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.326 [2024-11-26 15:25:03.760834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.326 [2024-11-26 15:25:03.760869] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.326 [2024-11-26 15:25:03.760891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.326 [2024-11-26 15:25:03.760914] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.326 [2024-11-26 15:25:03.760932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.326 15:25:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.326 15:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.586 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.586 "name": "Existed_Raid", 00:09:05.586 "uuid": "41cde004-c40a-48dc-9700-bae5386c9d18", 00:09:05.586 "strip_size_kb": 0, 
00:09:05.586 "state": "configuring", 00:09:05.586 "raid_level": "raid1", 00:09:05.586 "superblock": true, 00:09:05.586 "num_base_bdevs": 3, 00:09:05.586 "num_base_bdevs_discovered": 0, 00:09:05.586 "num_base_bdevs_operational": 3, 00:09:05.586 "base_bdevs_list": [ 00:09:05.586 { 00:09:05.586 "name": "BaseBdev1", 00:09:05.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.586 "is_configured": false, 00:09:05.586 "data_offset": 0, 00:09:05.586 "data_size": 0 00:09:05.586 }, 00:09:05.586 { 00:09:05.586 "name": "BaseBdev2", 00:09:05.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.586 "is_configured": false, 00:09:05.586 "data_offset": 0, 00:09:05.586 "data_size": 0 00:09:05.586 }, 00:09:05.586 { 00:09:05.586 "name": "BaseBdev3", 00:09:05.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.586 "is_configured": false, 00:09:05.586 "data_offset": 0, 00:09:05.586 "data_size": 0 00:09:05.586 } 00:09:05.586 ] 00:09:05.586 }' 00:09:05.586 15:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.586 15:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.846 [2024-11-26 15:25:04.224770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.846 [2024-11-26 15:25:04.224865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.846 [2024-11-26 15:25:04.236797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.846 [2024-11-26 15:25:04.236870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.846 [2024-11-26 15:25:04.236899] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.846 [2024-11-26 15:25:04.236918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.846 [2024-11-26 15:25:04.236938] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.846 [2024-11-26 15:25:04.236956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.846 [2024-11-26 15:25:04.257653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.846 BaseBdev1 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.846 [ 00:09:05.846 { 00:09:05.846 "name": "BaseBdev1", 00:09:05.846 "aliases": [ 00:09:05.846 "43e966e7-dc19-4854-a6d7-693b6e9b7d99" 00:09:05.846 ], 00:09:05.846 "product_name": "Malloc disk", 00:09:05.846 "block_size": 512, 00:09:05.846 "num_blocks": 65536, 00:09:05.846 "uuid": "43e966e7-dc19-4854-a6d7-693b6e9b7d99", 00:09:05.846 "assigned_rate_limits": { 00:09:05.846 "rw_ios_per_sec": 0, 00:09:05.846 "rw_mbytes_per_sec": 0, 00:09:05.846 "r_mbytes_per_sec": 0, 00:09:05.846 "w_mbytes_per_sec": 0 00:09:05.846 }, 00:09:05.846 "claimed": true, 00:09:05.846 "claim_type": "exclusive_write", 00:09:05.846 "zoned": false, 00:09:05.846 "supported_io_types": { 
00:09:05.846 "read": true, 00:09:05.846 "write": true, 00:09:05.846 "unmap": true, 00:09:05.846 "flush": true, 00:09:05.846 "reset": true, 00:09:05.846 "nvme_admin": false, 00:09:05.846 "nvme_io": false, 00:09:05.846 "nvme_io_md": false, 00:09:05.846 "write_zeroes": true, 00:09:05.846 "zcopy": true, 00:09:05.846 "get_zone_info": false, 00:09:05.846 "zone_management": false, 00:09:05.846 "zone_append": false, 00:09:05.846 "compare": false, 00:09:05.846 "compare_and_write": false, 00:09:05.846 "abort": true, 00:09:05.846 "seek_hole": false, 00:09:05.846 "seek_data": false, 00:09:05.846 "copy": true, 00:09:05.846 "nvme_iov_md": false 00:09:05.846 }, 00:09:05.846 "memory_domains": [ 00:09:05.846 { 00:09:05.846 "dma_device_id": "system", 00:09:05.846 "dma_device_type": 1 00:09:05.846 }, 00:09:05.846 { 00:09:05.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.846 "dma_device_type": 2 00:09:05.846 } 00:09:05.846 ], 00:09:05.846 "driver_specific": {} 00:09:05.846 } 00:09:05.846 ] 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.846 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.847 15:25:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.847 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.107 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.107 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.107 "name": "Existed_Raid", 00:09:06.107 "uuid": "52b7abb8-ee24-4ee4-b109-66e100c9fb13", 00:09:06.107 "strip_size_kb": 0, 00:09:06.107 "state": "configuring", 00:09:06.107 "raid_level": "raid1", 00:09:06.107 "superblock": true, 00:09:06.107 "num_base_bdevs": 3, 00:09:06.107 "num_base_bdevs_discovered": 1, 00:09:06.107 "num_base_bdevs_operational": 3, 00:09:06.107 "base_bdevs_list": [ 00:09:06.107 { 00:09:06.107 "name": "BaseBdev1", 00:09:06.107 "uuid": "43e966e7-dc19-4854-a6d7-693b6e9b7d99", 00:09:06.107 "is_configured": true, 00:09:06.107 "data_offset": 2048, 00:09:06.107 "data_size": 63488 00:09:06.107 }, 00:09:06.107 { 00:09:06.107 "name": "BaseBdev2", 00:09:06.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.107 "is_configured": false, 00:09:06.107 "data_offset": 0, 00:09:06.107 "data_size": 0 00:09:06.107 }, 00:09:06.107 { 00:09:06.107 "name": 
"BaseBdev3", 00:09:06.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.107 "is_configured": false, 00:09:06.107 "data_offset": 0, 00:09:06.107 "data_size": 0 00:09:06.107 } 00:09:06.107 ] 00:09:06.107 }' 00:09:06.107 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.107 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.366 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:06.366 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.366 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.366 [2024-11-26 15:25:04.717813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.366 [2024-11-26 15:25:04.717878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:06.366 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.366 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:06.366 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.366 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.366 [2024-11-26 15:25:04.729846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.366 [2024-11-26 15:25:04.731710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.366 [2024-11-26 15:25:04.731787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.366 [2024-11-26 15:25:04.731805] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:06.366 [2024-11-26 15:25:04.731813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.366 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.367 15:25:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.367 "name": "Existed_Raid", 00:09:06.367 "uuid": "a1c30484-9b72-4d89-ac9c-1ce3a0d1de93", 00:09:06.367 "strip_size_kb": 0, 00:09:06.367 "state": "configuring", 00:09:06.367 "raid_level": "raid1", 00:09:06.367 "superblock": true, 00:09:06.367 "num_base_bdevs": 3, 00:09:06.367 "num_base_bdevs_discovered": 1, 00:09:06.367 "num_base_bdevs_operational": 3, 00:09:06.367 "base_bdevs_list": [ 00:09:06.367 { 00:09:06.367 "name": "BaseBdev1", 00:09:06.367 "uuid": "43e966e7-dc19-4854-a6d7-693b6e9b7d99", 00:09:06.367 "is_configured": true, 00:09:06.367 "data_offset": 2048, 00:09:06.367 "data_size": 63488 00:09:06.367 }, 00:09:06.367 { 00:09:06.367 "name": "BaseBdev2", 00:09:06.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.367 "is_configured": false, 00:09:06.367 "data_offset": 0, 00:09:06.367 "data_size": 0 00:09:06.367 }, 00:09:06.367 { 00:09:06.367 "name": "BaseBdev3", 00:09:06.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.367 "is_configured": false, 00:09:06.367 "data_offset": 0, 00:09:06.367 "data_size": 0 00:09:06.367 } 00:09:06.367 ] 00:09:06.367 }' 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.367 15:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 [2024-11-26 15:25:05.152999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.936 BaseBdev2 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.936 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.937 [ 00:09:06.937 { 00:09:06.937 "name": "BaseBdev2", 00:09:06.937 "aliases": [ 00:09:06.937 
"2258af9b-1bde-4bf1-947e-b4ae2480b4bf" 00:09:06.937 ], 00:09:06.937 "product_name": "Malloc disk", 00:09:06.937 "block_size": 512, 00:09:06.937 "num_blocks": 65536, 00:09:06.937 "uuid": "2258af9b-1bde-4bf1-947e-b4ae2480b4bf", 00:09:06.937 "assigned_rate_limits": { 00:09:06.937 "rw_ios_per_sec": 0, 00:09:06.937 "rw_mbytes_per_sec": 0, 00:09:06.937 "r_mbytes_per_sec": 0, 00:09:06.937 "w_mbytes_per_sec": 0 00:09:06.937 }, 00:09:06.937 "claimed": true, 00:09:06.937 "claim_type": "exclusive_write", 00:09:06.937 "zoned": false, 00:09:06.937 "supported_io_types": { 00:09:06.937 "read": true, 00:09:06.937 "write": true, 00:09:06.937 "unmap": true, 00:09:06.937 "flush": true, 00:09:06.937 "reset": true, 00:09:06.937 "nvme_admin": false, 00:09:06.937 "nvme_io": false, 00:09:06.937 "nvme_io_md": false, 00:09:06.937 "write_zeroes": true, 00:09:06.937 "zcopy": true, 00:09:06.937 "get_zone_info": false, 00:09:06.937 "zone_management": false, 00:09:06.937 "zone_append": false, 00:09:06.937 "compare": false, 00:09:06.937 "compare_and_write": false, 00:09:06.937 "abort": true, 00:09:06.937 "seek_hole": false, 00:09:06.937 "seek_data": false, 00:09:06.937 "copy": true, 00:09:06.937 "nvme_iov_md": false 00:09:06.937 }, 00:09:06.937 "memory_domains": [ 00:09:06.937 { 00:09:06.937 "dma_device_id": "system", 00:09:06.937 "dma_device_type": 1 00:09:06.937 }, 00:09:06.937 { 00:09:06.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.937 "dma_device_type": 2 00:09:06.937 } 00:09:06.937 ], 00:09:06.937 "driver_specific": {} 00:09:06.937 } 00:09:06.937 ] 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.937 "name": "Existed_Raid", 00:09:06.937 "uuid": "a1c30484-9b72-4d89-ac9c-1ce3a0d1de93", 00:09:06.937 
"strip_size_kb": 0, 00:09:06.937 "state": "configuring", 00:09:06.937 "raid_level": "raid1", 00:09:06.937 "superblock": true, 00:09:06.937 "num_base_bdevs": 3, 00:09:06.937 "num_base_bdevs_discovered": 2, 00:09:06.937 "num_base_bdevs_operational": 3, 00:09:06.937 "base_bdevs_list": [ 00:09:06.937 { 00:09:06.937 "name": "BaseBdev1", 00:09:06.937 "uuid": "43e966e7-dc19-4854-a6d7-693b6e9b7d99", 00:09:06.937 "is_configured": true, 00:09:06.937 "data_offset": 2048, 00:09:06.937 "data_size": 63488 00:09:06.937 }, 00:09:06.937 { 00:09:06.937 "name": "BaseBdev2", 00:09:06.937 "uuid": "2258af9b-1bde-4bf1-947e-b4ae2480b4bf", 00:09:06.937 "is_configured": true, 00:09:06.937 "data_offset": 2048, 00:09:06.937 "data_size": 63488 00:09:06.937 }, 00:09:06.937 { 00:09:06.937 "name": "BaseBdev3", 00:09:06.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.937 "is_configured": false, 00:09:06.937 "data_offset": 0, 00:09:06.937 "data_size": 0 00:09:06.937 } 00:09:06.937 ] 00:09:06.937 }' 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.937 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.197 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.197 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.197 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.197 [2024-11-26 15:25:05.630557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.197 [2024-11-26 15:25:05.630748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:07.197 [2024-11-26 15:25:05.630763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.197 [2024-11-26 15:25:05.631039] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:07.197 [2024-11-26 15:25:05.631232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:07.198 [2024-11-26 15:25:05.631255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:07.198 BaseBdev3 00:09:07.198 [2024-11-26 15:25:05.631397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.198 [ 00:09:07.198 { 00:09:07.198 "name": "BaseBdev3", 00:09:07.198 "aliases": [ 00:09:07.198 "6fea0e16-e670-4ae0-848c-b792cc715b80" 00:09:07.198 ], 00:09:07.198 "product_name": "Malloc disk", 00:09:07.198 "block_size": 512, 00:09:07.198 "num_blocks": 65536, 00:09:07.198 "uuid": "6fea0e16-e670-4ae0-848c-b792cc715b80", 00:09:07.198 "assigned_rate_limits": { 00:09:07.198 "rw_ios_per_sec": 0, 00:09:07.198 "rw_mbytes_per_sec": 0, 00:09:07.198 "r_mbytes_per_sec": 0, 00:09:07.198 "w_mbytes_per_sec": 0 00:09:07.198 }, 00:09:07.198 "claimed": true, 00:09:07.198 "claim_type": "exclusive_write", 00:09:07.198 "zoned": false, 00:09:07.198 "supported_io_types": { 00:09:07.198 "read": true, 00:09:07.198 "write": true, 00:09:07.198 "unmap": true, 00:09:07.198 "flush": true, 00:09:07.198 "reset": true, 00:09:07.198 "nvme_admin": false, 00:09:07.198 "nvme_io": false, 00:09:07.198 "nvme_io_md": false, 00:09:07.198 "write_zeroes": true, 00:09:07.198 "zcopy": true, 00:09:07.198 "get_zone_info": false, 00:09:07.198 "zone_management": false, 00:09:07.198 "zone_append": false, 00:09:07.198 "compare": false, 00:09:07.198 "compare_and_write": false, 00:09:07.198 "abort": true, 00:09:07.198 "seek_hole": false, 00:09:07.198 "seek_data": false, 00:09:07.198 "copy": true, 00:09:07.198 "nvme_iov_md": false 00:09:07.198 }, 00:09:07.198 "memory_domains": [ 00:09:07.198 { 00:09:07.198 "dma_device_id": "system", 00:09:07.198 "dma_device_type": 1 00:09:07.198 }, 00:09:07.198 { 00:09:07.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.198 "dma_device_type": 2 00:09:07.198 } 00:09:07.198 ], 00:09:07.198 "driver_specific": {} 00:09:07.198 } 00:09:07.198 ] 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.198 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.458 15:25:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.458 "name": "Existed_Raid", 00:09:07.458 "uuid": "a1c30484-9b72-4d89-ac9c-1ce3a0d1de93", 00:09:07.458 "strip_size_kb": 0, 00:09:07.458 "state": "online", 00:09:07.458 "raid_level": "raid1", 00:09:07.458 "superblock": true, 00:09:07.458 "num_base_bdevs": 3, 00:09:07.458 "num_base_bdevs_discovered": 3, 00:09:07.458 "num_base_bdevs_operational": 3, 00:09:07.458 "base_bdevs_list": [ 00:09:07.458 { 00:09:07.458 "name": "BaseBdev1", 00:09:07.458 "uuid": "43e966e7-dc19-4854-a6d7-693b6e9b7d99", 00:09:07.458 "is_configured": true, 00:09:07.458 "data_offset": 2048, 00:09:07.458 "data_size": 63488 00:09:07.458 }, 00:09:07.458 { 00:09:07.458 "name": "BaseBdev2", 00:09:07.458 "uuid": "2258af9b-1bde-4bf1-947e-b4ae2480b4bf", 00:09:07.458 "is_configured": true, 00:09:07.458 "data_offset": 2048, 00:09:07.458 "data_size": 63488 00:09:07.458 }, 00:09:07.458 { 00:09:07.458 "name": "BaseBdev3", 00:09:07.458 "uuid": "6fea0e16-e670-4ae0-848c-b792cc715b80", 00:09:07.458 "is_configured": true, 00:09:07.458 "data_offset": 2048, 00:09:07.458 "data_size": 63488 00:09:07.458 } 00:09:07.458 ] 00:09:07.458 }' 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.458 15:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.718 
15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.718 [2024-11-26 15:25:06.127057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.718 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.718 "name": "Existed_Raid", 00:09:07.718 "aliases": [ 00:09:07.718 "a1c30484-9b72-4d89-ac9c-1ce3a0d1de93" 00:09:07.718 ], 00:09:07.718 "product_name": "Raid Volume", 00:09:07.718 "block_size": 512, 00:09:07.718 "num_blocks": 63488, 00:09:07.718 "uuid": "a1c30484-9b72-4d89-ac9c-1ce3a0d1de93", 00:09:07.718 "assigned_rate_limits": { 00:09:07.718 "rw_ios_per_sec": 0, 00:09:07.718 "rw_mbytes_per_sec": 0, 00:09:07.718 "r_mbytes_per_sec": 0, 00:09:07.718 "w_mbytes_per_sec": 0 00:09:07.718 }, 00:09:07.718 "claimed": false, 00:09:07.718 "zoned": false, 00:09:07.718 "supported_io_types": { 00:09:07.718 "read": true, 00:09:07.719 "write": true, 00:09:07.719 "unmap": false, 00:09:07.719 "flush": false, 00:09:07.719 "reset": true, 00:09:07.719 "nvme_admin": false, 00:09:07.719 "nvme_io": false, 00:09:07.719 "nvme_io_md": false, 00:09:07.719 "write_zeroes": true, 00:09:07.719 "zcopy": false, 00:09:07.719 "get_zone_info": false, 00:09:07.719 "zone_management": false, 00:09:07.719 "zone_append": false, 00:09:07.719 "compare": false, 00:09:07.719 "compare_and_write": false, 00:09:07.719 
"abort": false, 00:09:07.719 "seek_hole": false, 00:09:07.719 "seek_data": false, 00:09:07.719 "copy": false, 00:09:07.719 "nvme_iov_md": false 00:09:07.719 }, 00:09:07.719 "memory_domains": [ 00:09:07.719 { 00:09:07.719 "dma_device_id": "system", 00:09:07.719 "dma_device_type": 1 00:09:07.719 }, 00:09:07.719 { 00:09:07.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.719 "dma_device_type": 2 00:09:07.719 }, 00:09:07.719 { 00:09:07.719 "dma_device_id": "system", 00:09:07.719 "dma_device_type": 1 00:09:07.719 }, 00:09:07.719 { 00:09:07.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.719 "dma_device_type": 2 00:09:07.719 }, 00:09:07.719 { 00:09:07.719 "dma_device_id": "system", 00:09:07.719 "dma_device_type": 1 00:09:07.719 }, 00:09:07.719 { 00:09:07.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.719 "dma_device_type": 2 00:09:07.719 } 00:09:07.719 ], 00:09:07.719 "driver_specific": { 00:09:07.719 "raid": { 00:09:07.719 "uuid": "a1c30484-9b72-4d89-ac9c-1ce3a0d1de93", 00:09:07.719 "strip_size_kb": 0, 00:09:07.719 "state": "online", 00:09:07.719 "raid_level": "raid1", 00:09:07.719 "superblock": true, 00:09:07.719 "num_base_bdevs": 3, 00:09:07.719 "num_base_bdevs_discovered": 3, 00:09:07.719 "num_base_bdevs_operational": 3, 00:09:07.719 "base_bdevs_list": [ 00:09:07.719 { 00:09:07.719 "name": "BaseBdev1", 00:09:07.719 "uuid": "43e966e7-dc19-4854-a6d7-693b6e9b7d99", 00:09:07.719 "is_configured": true, 00:09:07.719 "data_offset": 2048, 00:09:07.719 "data_size": 63488 00:09:07.719 }, 00:09:07.719 { 00:09:07.719 "name": "BaseBdev2", 00:09:07.719 "uuid": "2258af9b-1bde-4bf1-947e-b4ae2480b4bf", 00:09:07.719 "is_configured": true, 00:09:07.719 "data_offset": 2048, 00:09:07.719 "data_size": 63488 00:09:07.719 }, 00:09:07.719 { 00:09:07.719 "name": "BaseBdev3", 00:09:07.719 "uuid": "6fea0e16-e670-4ae0-848c-b792cc715b80", 00:09:07.719 "is_configured": true, 00:09:07.719 "data_offset": 2048, 00:09:07.719 "data_size": 63488 00:09:07.719 } 00:09:07.719 ] 
00:09:07.719 } 00:09:07.719 } 00:09:07.719 }' 00:09:07.719 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.719 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:07.719 BaseBdev2 00:09:07.719 BaseBdev3' 00:09:07.719 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.978 
15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.978 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.979 [2024-11-26 15:25:06.378872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev1 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.979 15:25:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.979 "name": "Existed_Raid", 00:09:07.979 "uuid": "a1c30484-9b72-4d89-ac9c-1ce3a0d1de93", 00:09:07.979 "strip_size_kb": 0, 00:09:07.979 "state": "online", 00:09:07.979 "raid_level": "raid1", 00:09:07.979 "superblock": true, 00:09:07.979 "num_base_bdevs": 3, 00:09:07.979 "num_base_bdevs_discovered": 2, 00:09:07.979 "num_base_bdevs_operational": 2, 00:09:07.979 "base_bdevs_list": [ 00:09:07.979 { 00:09:07.979 "name": null, 00:09:07.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.979 "is_configured": false, 00:09:07.979 "data_offset": 0, 00:09:07.979 "data_size": 63488 00:09:07.979 }, 00:09:07.979 { 00:09:07.979 "name": "BaseBdev2", 00:09:07.979 "uuid": "2258af9b-1bde-4bf1-947e-b4ae2480b4bf", 00:09:07.979 "is_configured": true, 00:09:07.979 "data_offset": 2048, 00:09:07.979 "data_size": 63488 00:09:07.979 }, 00:09:07.979 { 00:09:07.979 "name": "BaseBdev3", 00:09:07.979 "uuid": "6fea0e16-e670-4ae0-848c-b792cc715b80", 00:09:07.979 "is_configured": true, 00:09:07.979 "data_offset": 2048, 00:09:07.979 "data_size": 63488 00:09:07.979 } 00:09:07.979 ] 00:09:07.979 }' 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.979 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.548 15:25:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.548 [2024-11-26 15:25:06.910174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.548 [2024-11-26 15:25:06.969343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.548 [2024-11-26 15:25:06.969438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.548 [2024-11-26 15:25:06.980898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.548 [2024-11-26 15:25:06.980956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.548 [2024-11-26 15:25:06.980967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.548 15:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.807 BaseBdev2 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.807 15:25:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.807 [ 00:09:08.807 { 00:09:08.807 "name": "BaseBdev2", 00:09:08.807 "aliases": [ 00:09:08.807 "67016115-5c9b-47cf-b5d7-ad40fd01cdc2" 00:09:08.807 ], 00:09:08.807 "product_name": "Malloc disk", 00:09:08.807 "block_size": 512, 00:09:08.807 "num_blocks": 65536, 00:09:08.807 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:08.807 "assigned_rate_limits": { 00:09:08.807 "rw_ios_per_sec": 0, 00:09:08.807 "rw_mbytes_per_sec": 0, 00:09:08.807 "r_mbytes_per_sec": 0, 00:09:08.807 "w_mbytes_per_sec": 0 00:09:08.807 }, 00:09:08.807 "claimed": false, 00:09:08.807 "zoned": false, 00:09:08.807 "supported_io_types": { 00:09:08.807 "read": true, 00:09:08.807 "write": true, 00:09:08.807 "unmap": true, 00:09:08.807 "flush": true, 00:09:08.807 "reset": true, 00:09:08.807 "nvme_admin": false, 00:09:08.807 "nvme_io": false, 00:09:08.807 "nvme_io_md": false, 00:09:08.807 "write_zeroes": true, 00:09:08.807 "zcopy": true, 00:09:08.807 "get_zone_info": false, 00:09:08.807 "zone_management": false, 00:09:08.807 "zone_append": false, 00:09:08.807 "compare": false, 00:09:08.807 
"compare_and_write": false, 00:09:08.807 "abort": true, 00:09:08.807 "seek_hole": false, 00:09:08.807 "seek_data": false, 00:09:08.807 "copy": true, 00:09:08.807 "nvme_iov_md": false 00:09:08.807 }, 00:09:08.807 "memory_domains": [ 00:09:08.807 { 00:09:08.807 "dma_device_id": "system", 00:09:08.807 "dma_device_type": 1 00:09:08.807 }, 00:09:08.807 { 00:09:08.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.807 "dma_device_type": 2 00:09:08.807 } 00:09:08.807 ], 00:09:08.807 "driver_specific": {} 00:09:08.807 } 00:09:08.807 ] 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.807 BaseBdev3 00:09:08.807 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.808 [ 00:09:08.808 { 00:09:08.808 "name": "BaseBdev3", 00:09:08.808 "aliases": [ 00:09:08.808 "f1e2c130-4133-46ef-8dc8-bfaa48b02f52" 00:09:08.808 ], 00:09:08.808 "product_name": "Malloc disk", 00:09:08.808 "block_size": 512, 00:09:08.808 "num_blocks": 65536, 00:09:08.808 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:08.808 "assigned_rate_limits": { 00:09:08.808 "rw_ios_per_sec": 0, 00:09:08.808 "rw_mbytes_per_sec": 0, 00:09:08.808 "r_mbytes_per_sec": 0, 00:09:08.808 "w_mbytes_per_sec": 0 00:09:08.808 }, 00:09:08.808 "claimed": false, 00:09:08.808 "zoned": false, 00:09:08.808 "supported_io_types": { 00:09:08.808 "read": true, 00:09:08.808 "write": true, 00:09:08.808 "unmap": true, 00:09:08.808 "flush": true, 00:09:08.808 "reset": true, 00:09:08.808 "nvme_admin": false, 00:09:08.808 "nvme_io": false, 00:09:08.808 "nvme_io_md": false, 00:09:08.808 "write_zeroes": true, 00:09:08.808 "zcopy": true, 00:09:08.808 "get_zone_info": false, 00:09:08.808 "zone_management": false, 00:09:08.808 
"zone_append": false, 00:09:08.808 "compare": false, 00:09:08.808 "compare_and_write": false, 00:09:08.808 "abort": true, 00:09:08.808 "seek_hole": false, 00:09:08.808 "seek_data": false, 00:09:08.808 "copy": true, 00:09:08.808 "nvme_iov_md": false 00:09:08.808 }, 00:09:08.808 "memory_domains": [ 00:09:08.808 { 00:09:08.808 "dma_device_id": "system", 00:09:08.808 "dma_device_type": 1 00:09:08.808 }, 00:09:08.808 { 00:09:08.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.808 "dma_device_type": 2 00:09:08.808 } 00:09:08.808 ], 00:09:08.808 "driver_specific": {} 00:09:08.808 } 00:09:08.808 ] 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.808 [2024-11-26 15:25:07.146094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.808 [2024-11-26 15:25:07.146203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.808 [2024-11-26 15:25:07.146249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.808 [2024-11-26 15:25:07.148006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.808 15:25:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.808 "name": 
"Existed_Raid", 00:09:08.808 "uuid": "43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:08.808 "strip_size_kb": 0, 00:09:08.808 "state": "configuring", 00:09:08.808 "raid_level": "raid1", 00:09:08.808 "superblock": true, 00:09:08.808 "num_base_bdevs": 3, 00:09:08.808 "num_base_bdevs_discovered": 2, 00:09:08.808 "num_base_bdevs_operational": 3, 00:09:08.808 "base_bdevs_list": [ 00:09:08.808 { 00:09:08.808 "name": "BaseBdev1", 00:09:08.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.808 "is_configured": false, 00:09:08.808 "data_offset": 0, 00:09:08.808 "data_size": 0 00:09:08.808 }, 00:09:08.808 { 00:09:08.808 "name": "BaseBdev2", 00:09:08.808 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:08.808 "is_configured": true, 00:09:08.808 "data_offset": 2048, 00:09:08.808 "data_size": 63488 00:09:08.808 }, 00:09:08.808 { 00:09:08.808 "name": "BaseBdev3", 00:09:08.808 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:08.808 "is_configured": true, 00:09:08.808 "data_offset": 2048, 00:09:08.808 "data_size": 63488 00:09:08.808 } 00:09:08.808 ] 00:09:08.808 }' 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.808 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 [2024-11-26 15:25:07.582248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.411 "name": "Existed_Raid", 00:09:09.411 "uuid": "43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:09.411 "strip_size_kb": 0, 00:09:09.411 "state": "configuring", 00:09:09.411 "raid_level": "raid1", 00:09:09.411 "superblock": true, 00:09:09.411 
"num_base_bdevs": 3, 00:09:09.411 "num_base_bdevs_discovered": 1, 00:09:09.411 "num_base_bdevs_operational": 3, 00:09:09.411 "base_bdevs_list": [ 00:09:09.411 { 00:09:09.411 "name": "BaseBdev1", 00:09:09.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.411 "is_configured": false, 00:09:09.411 "data_offset": 0, 00:09:09.411 "data_size": 0 00:09:09.411 }, 00:09:09.411 { 00:09:09.411 "name": null, 00:09:09.411 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:09.411 "is_configured": false, 00:09:09.411 "data_offset": 0, 00:09:09.411 "data_size": 63488 00:09:09.411 }, 00:09:09.411 { 00:09:09.411 "name": "BaseBdev3", 00:09:09.411 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:09.411 "is_configured": true, 00:09:09.411 "data_offset": 2048, 00:09:09.411 "data_size": 63488 00:09:09.411 } 00:09:09.411 ] 00:09:09.411 }' 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.411 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.671 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.672 15:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.672 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.672 15:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.672 [2024-11-26 15:25:08.033316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.672 BaseBdev1 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.672 [ 00:09:09.672 { 00:09:09.672 "name": "BaseBdev1", 00:09:09.672 "aliases": [ 00:09:09.672 
"6e94fc23-c8d6-4153-8443-f0b5bd0fa93a" 00:09:09.672 ], 00:09:09.672 "product_name": "Malloc disk", 00:09:09.672 "block_size": 512, 00:09:09.672 "num_blocks": 65536, 00:09:09.672 "uuid": "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a", 00:09:09.672 "assigned_rate_limits": { 00:09:09.672 "rw_ios_per_sec": 0, 00:09:09.672 "rw_mbytes_per_sec": 0, 00:09:09.672 "r_mbytes_per_sec": 0, 00:09:09.672 "w_mbytes_per_sec": 0 00:09:09.672 }, 00:09:09.672 "claimed": true, 00:09:09.672 "claim_type": "exclusive_write", 00:09:09.672 "zoned": false, 00:09:09.672 "supported_io_types": { 00:09:09.672 "read": true, 00:09:09.672 "write": true, 00:09:09.672 "unmap": true, 00:09:09.672 "flush": true, 00:09:09.672 "reset": true, 00:09:09.672 "nvme_admin": false, 00:09:09.672 "nvme_io": false, 00:09:09.672 "nvme_io_md": false, 00:09:09.672 "write_zeroes": true, 00:09:09.672 "zcopy": true, 00:09:09.672 "get_zone_info": false, 00:09:09.672 "zone_management": false, 00:09:09.672 "zone_append": false, 00:09:09.672 "compare": false, 00:09:09.672 "compare_and_write": false, 00:09:09.672 "abort": true, 00:09:09.672 "seek_hole": false, 00:09:09.672 "seek_data": false, 00:09:09.672 "copy": true, 00:09:09.672 "nvme_iov_md": false 00:09:09.672 }, 00:09:09.672 "memory_domains": [ 00:09:09.672 { 00:09:09.672 "dma_device_id": "system", 00:09:09.672 "dma_device_type": 1 00:09:09.672 }, 00:09:09.672 { 00:09:09.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.672 "dma_device_type": 2 00:09:09.672 } 00:09:09.672 ], 00:09:09.672 "driver_specific": {} 00:09:09.672 } 00:09:09.672 ] 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.672 "name": "Existed_Raid", 00:09:09.672 "uuid": "43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:09.672 "strip_size_kb": 0, 00:09:09.672 "state": "configuring", 00:09:09.672 "raid_level": "raid1", 00:09:09.672 "superblock": true, 00:09:09.672 "num_base_bdevs": 3, 00:09:09.672 "num_base_bdevs_discovered": 2, 00:09:09.672 
"num_base_bdevs_operational": 3, 00:09:09.672 "base_bdevs_list": [ 00:09:09.672 { 00:09:09.672 "name": "BaseBdev1", 00:09:09.672 "uuid": "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a", 00:09:09.672 "is_configured": true, 00:09:09.672 "data_offset": 2048, 00:09:09.672 "data_size": 63488 00:09:09.672 }, 00:09:09.672 { 00:09:09.672 "name": null, 00:09:09.672 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:09.672 "is_configured": false, 00:09:09.672 "data_offset": 0, 00:09:09.672 "data_size": 63488 00:09:09.672 }, 00:09:09.672 { 00:09:09.672 "name": "BaseBdev3", 00:09:09.672 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:09.672 "is_configured": true, 00:09:09.672 "data_offset": 2048, 00:09:09.672 "data_size": 63488 00:09:09.672 } 00:09:09.672 ] 00:09:09.672 }' 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.672 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.240 [2024-11-26 15:25:08.549531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.240 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.241 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.241 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.241 15:25:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.241 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.241 "name": "Existed_Raid", 00:09:10.241 "uuid": "43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:10.241 "strip_size_kb": 0, 00:09:10.241 "state": "configuring", 00:09:10.241 "raid_level": "raid1", 00:09:10.241 "superblock": true, 00:09:10.241 "num_base_bdevs": 3, 00:09:10.241 "num_base_bdevs_discovered": 1, 00:09:10.241 "num_base_bdevs_operational": 3, 00:09:10.241 "base_bdevs_list": [ 00:09:10.241 { 00:09:10.241 "name": "BaseBdev1", 00:09:10.241 "uuid": "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a", 00:09:10.241 "is_configured": true, 00:09:10.241 "data_offset": 2048, 00:09:10.241 "data_size": 63488 00:09:10.241 }, 00:09:10.241 { 00:09:10.241 "name": null, 00:09:10.241 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:10.241 "is_configured": false, 00:09:10.241 "data_offset": 0, 00:09:10.241 "data_size": 63488 00:09:10.241 }, 00:09:10.241 { 00:09:10.241 "name": null, 00:09:10.241 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:10.241 "is_configured": false, 00:09:10.241 "data_offset": 0, 00:09:10.241 "data_size": 63488 00:09:10.241 } 00:09:10.241 ] 00:09:10.241 }' 00:09:10.241 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.241 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.809 15:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.809 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.809 15:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 15:25:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 [2024-11-26 15:25:09.017670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.809 15:25:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.809 "name": "Existed_Raid", 00:09:10.809 "uuid": "43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:10.809 "strip_size_kb": 0, 00:09:10.809 "state": "configuring", 00:09:10.809 "raid_level": "raid1", 00:09:10.809 "superblock": true, 00:09:10.809 "num_base_bdevs": 3, 00:09:10.809 "num_base_bdevs_discovered": 2, 00:09:10.809 "num_base_bdevs_operational": 3, 00:09:10.809 "base_bdevs_list": [ 00:09:10.809 { 00:09:10.809 "name": "BaseBdev1", 00:09:10.809 "uuid": "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a", 00:09:10.809 "is_configured": true, 00:09:10.809 "data_offset": 2048, 00:09:10.809 "data_size": 63488 00:09:10.809 }, 00:09:10.809 { 00:09:10.809 "name": null, 00:09:10.809 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:10.809 "is_configured": false, 00:09:10.809 "data_offset": 0, 00:09:10.809 "data_size": 63488 00:09:10.809 }, 00:09:10.809 { 00:09:10.809 "name": "BaseBdev3", 00:09:10.809 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:10.809 "is_configured": true, 00:09:10.809 "data_offset": 2048, 00:09:10.809 "data_size": 63488 00:09:10.809 } 00:09:10.809 ] 00:09:10.809 }' 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.809 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.067 
15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.067 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.067 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.067 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.067 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.067 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.067 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.067 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.067 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.067 [2024-11-26 15:25:09.529839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.327 
15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.327 "name": "Existed_Raid", 00:09:11.327 "uuid": "43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:11.327 "strip_size_kb": 0, 00:09:11.327 "state": "configuring", 00:09:11.327 "raid_level": "raid1", 00:09:11.327 "superblock": true, 00:09:11.327 "num_base_bdevs": 3, 00:09:11.327 "num_base_bdevs_discovered": 1, 00:09:11.327 "num_base_bdevs_operational": 3, 00:09:11.327 "base_bdevs_list": [ 00:09:11.327 { 00:09:11.327 "name": null, 00:09:11.327 "uuid": "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a", 00:09:11.327 "is_configured": false, 00:09:11.327 "data_offset": 0, 00:09:11.327 "data_size": 63488 00:09:11.327 }, 00:09:11.327 { 00:09:11.327 "name": null, 00:09:11.327 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:11.327 "is_configured": false, 00:09:11.327 "data_offset": 0, 00:09:11.327 "data_size": 63488 00:09:11.327 }, 00:09:11.327 { 00:09:11.327 "name": 
"BaseBdev3", 00:09:11.327 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:11.327 "is_configured": true, 00:09:11.327 "data_offset": 2048, 00:09:11.327 "data_size": 63488 00:09:11.327 } 00:09:11.327 ] 00:09:11.327 }' 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.327 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.587 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.587 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.587 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.587 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.587 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.587 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:11.587 15:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:11.587 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.587 15:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.587 [2024-11-26 15:25:09.996343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.587 "name": "Existed_Raid", 00:09:11.587 "uuid": "43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:11.587 "strip_size_kb": 0, 00:09:11.587 "state": "configuring", 00:09:11.587 "raid_level": "raid1", 00:09:11.587 "superblock": true, 00:09:11.587 "num_base_bdevs": 3, 00:09:11.587 "num_base_bdevs_discovered": 2, 00:09:11.587 "num_base_bdevs_operational": 3, 00:09:11.587 
"base_bdevs_list": [ 00:09:11.587 { 00:09:11.587 "name": null, 00:09:11.587 "uuid": "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a", 00:09:11.587 "is_configured": false, 00:09:11.587 "data_offset": 0, 00:09:11.587 "data_size": 63488 00:09:11.587 }, 00:09:11.587 { 00:09:11.587 "name": "BaseBdev2", 00:09:11.587 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:11.587 "is_configured": true, 00:09:11.587 "data_offset": 2048, 00:09:11.587 "data_size": 63488 00:09:11.587 }, 00:09:11.587 { 00:09:11.587 "name": "BaseBdev3", 00:09:11.587 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:11.587 "is_configured": true, 00:09:11.587 "data_offset": 2048, 00:09:11.587 "data_size": 63488 00:09:11.587 } 00:09:11.587 ] 00:09:11.587 }' 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.587 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.157 15:25:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6e94fc23-c8d6-4153-8443-f0b5bd0fa93a 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.157 [2024-11-26 15:25:10.539449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:12.157 NewBaseBdev 00:09:12.157 [2024-11-26 15:25:10.539676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:12.157 [2024-11-26 15:25:10.539701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:12.157 [2024-11-26 15:25:10.539919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:12.157 [2024-11-26 15:25:10.540046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:12.157 [2024-11-26 15:25:10.540055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:12.157 [2024-11-26 15:25:10.540153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.157 [ 00:09:12.157 { 00:09:12.157 "name": "NewBaseBdev", 00:09:12.157 "aliases": [ 00:09:12.157 "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a" 00:09:12.157 ], 00:09:12.157 "product_name": "Malloc disk", 00:09:12.157 "block_size": 512, 00:09:12.157 "num_blocks": 65536, 00:09:12.157 "uuid": "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a", 00:09:12.157 "assigned_rate_limits": { 00:09:12.157 "rw_ios_per_sec": 0, 00:09:12.157 "rw_mbytes_per_sec": 0, 00:09:12.157 "r_mbytes_per_sec": 0, 00:09:12.157 "w_mbytes_per_sec": 0 00:09:12.157 }, 00:09:12.157 "claimed": true, 00:09:12.157 "claim_type": "exclusive_write", 00:09:12.157 "zoned": false, 00:09:12.157 "supported_io_types": { 00:09:12.157 "read": true, 00:09:12.157 "write": true, 00:09:12.157 "unmap": true, 00:09:12.157 "flush": true, 00:09:12.157 "reset": true, 00:09:12.157 "nvme_admin": 
false, 00:09:12.157 "nvme_io": false, 00:09:12.157 "nvme_io_md": false, 00:09:12.157 "write_zeroes": true, 00:09:12.157 "zcopy": true, 00:09:12.157 "get_zone_info": false, 00:09:12.157 "zone_management": false, 00:09:12.157 "zone_append": false, 00:09:12.157 "compare": false, 00:09:12.157 "compare_and_write": false, 00:09:12.157 "abort": true, 00:09:12.157 "seek_hole": false, 00:09:12.157 "seek_data": false, 00:09:12.157 "copy": true, 00:09:12.157 "nvme_iov_md": false 00:09:12.157 }, 00:09:12.157 "memory_domains": [ 00:09:12.157 { 00:09:12.157 "dma_device_id": "system", 00:09:12.157 "dma_device_type": 1 00:09:12.157 }, 00:09:12.157 { 00:09:12.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.157 "dma_device_type": 2 00:09:12.157 } 00:09:12.157 ], 00:09:12.157 "driver_specific": {} 00:09:12.157 } 00:09:12.157 ] 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.157 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.158 "name": "Existed_Raid", 00:09:12.158 "uuid": "43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:12.158 "strip_size_kb": 0, 00:09:12.158 "state": "online", 00:09:12.158 "raid_level": "raid1", 00:09:12.158 "superblock": true, 00:09:12.158 "num_base_bdevs": 3, 00:09:12.158 "num_base_bdevs_discovered": 3, 00:09:12.158 "num_base_bdevs_operational": 3, 00:09:12.158 "base_bdevs_list": [ 00:09:12.158 { 00:09:12.158 "name": "NewBaseBdev", 00:09:12.158 "uuid": "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a", 00:09:12.158 "is_configured": true, 00:09:12.158 "data_offset": 2048, 00:09:12.158 "data_size": 63488 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "name": "BaseBdev2", 00:09:12.158 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:12.158 "is_configured": true, 00:09:12.158 "data_offset": 2048, 00:09:12.158 "data_size": 63488 00:09:12.158 }, 00:09:12.158 { 00:09:12.158 "name": "BaseBdev3", 00:09:12.158 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:12.158 "is_configured": true, 00:09:12.158 "data_offset": 2048, 00:09:12.158 "data_size": 63488 00:09:12.158 } 
00:09:12.158 ] 00:09:12.158 }' 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.158 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.725 15:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.725 [2024-11-26 15:25:10.991936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.725 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.725 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.725 "name": "Existed_Raid", 00:09:12.725 "aliases": [ 00:09:12.725 "43eacfba-d8ce-407d-b0a7-3e71fa20505c" 00:09:12.725 ], 00:09:12.725 "product_name": "Raid Volume", 00:09:12.725 "block_size": 512, 00:09:12.725 "num_blocks": 63488, 00:09:12.725 "uuid": 
"43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:12.725 "assigned_rate_limits": { 00:09:12.725 "rw_ios_per_sec": 0, 00:09:12.725 "rw_mbytes_per_sec": 0, 00:09:12.725 "r_mbytes_per_sec": 0, 00:09:12.725 "w_mbytes_per_sec": 0 00:09:12.725 }, 00:09:12.725 "claimed": false, 00:09:12.725 "zoned": false, 00:09:12.725 "supported_io_types": { 00:09:12.725 "read": true, 00:09:12.725 "write": true, 00:09:12.725 "unmap": false, 00:09:12.725 "flush": false, 00:09:12.725 "reset": true, 00:09:12.725 "nvme_admin": false, 00:09:12.725 "nvme_io": false, 00:09:12.725 "nvme_io_md": false, 00:09:12.725 "write_zeroes": true, 00:09:12.725 "zcopy": false, 00:09:12.725 "get_zone_info": false, 00:09:12.725 "zone_management": false, 00:09:12.725 "zone_append": false, 00:09:12.725 "compare": false, 00:09:12.725 "compare_and_write": false, 00:09:12.725 "abort": false, 00:09:12.725 "seek_hole": false, 00:09:12.725 "seek_data": false, 00:09:12.725 "copy": false, 00:09:12.725 "nvme_iov_md": false 00:09:12.725 }, 00:09:12.725 "memory_domains": [ 00:09:12.725 { 00:09:12.725 "dma_device_id": "system", 00:09:12.725 "dma_device_type": 1 00:09:12.726 }, 00:09:12.726 { 00:09:12.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.726 "dma_device_type": 2 00:09:12.726 }, 00:09:12.726 { 00:09:12.726 "dma_device_id": "system", 00:09:12.726 "dma_device_type": 1 00:09:12.726 }, 00:09:12.726 { 00:09:12.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.726 "dma_device_type": 2 00:09:12.726 }, 00:09:12.726 { 00:09:12.726 "dma_device_id": "system", 00:09:12.726 "dma_device_type": 1 00:09:12.726 }, 00:09:12.726 { 00:09:12.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.726 "dma_device_type": 2 00:09:12.726 } 00:09:12.726 ], 00:09:12.726 "driver_specific": { 00:09:12.726 "raid": { 00:09:12.726 "uuid": "43eacfba-d8ce-407d-b0a7-3e71fa20505c", 00:09:12.726 "strip_size_kb": 0, 00:09:12.726 "state": "online", 00:09:12.726 "raid_level": "raid1", 00:09:12.726 "superblock": true, 00:09:12.726 "num_base_bdevs": 
3, 00:09:12.726 "num_base_bdevs_discovered": 3, 00:09:12.726 "num_base_bdevs_operational": 3, 00:09:12.726 "base_bdevs_list": [ 00:09:12.726 { 00:09:12.726 "name": "NewBaseBdev", 00:09:12.726 "uuid": "6e94fc23-c8d6-4153-8443-f0b5bd0fa93a", 00:09:12.726 "is_configured": true, 00:09:12.726 "data_offset": 2048, 00:09:12.726 "data_size": 63488 00:09:12.726 }, 00:09:12.726 { 00:09:12.726 "name": "BaseBdev2", 00:09:12.726 "uuid": "67016115-5c9b-47cf-b5d7-ad40fd01cdc2", 00:09:12.726 "is_configured": true, 00:09:12.726 "data_offset": 2048, 00:09:12.726 "data_size": 63488 00:09:12.726 }, 00:09:12.726 { 00:09:12.726 "name": "BaseBdev3", 00:09:12.726 "uuid": "f1e2c130-4133-46ef-8dc8-bfaa48b02f52", 00:09:12.726 "is_configured": true, 00:09:12.726 "data_offset": 2048, 00:09:12.726 "data_size": 63488 00:09:12.726 } 00:09:12.726 ] 00:09:12.726 } 00:09:12.726 } 00:09:12.726 }' 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:12.726 BaseBdev2 00:09:12.726 BaseBdev3' 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.726 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.985 [2024-11-26 15:25:11.255688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.985 [2024-11-26 15:25:11.255714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.985 [2024-11-26 15:25:11.255783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.985 [2024-11-26 15:25:11.256023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.985 [2024-11-26 15:25:11.256035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80639 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80639 ']' 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80639 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:12.985 15:25:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80639 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80639' 00:09:12.985 killing process with pid 80639 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80639 00:09:12.985 [2024-11-26 15:25:11.305504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.985 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80639 00:09:12.985 [2024-11-26 15:25:11.337153] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.243 15:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:13.243 00:09:13.243 real 0m8.708s 00:09:13.243 user 0m14.952s 00:09:13.243 sys 0m1.687s 00:09:13.243 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.243 15:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.243 ************************************ 00:09:13.243 END TEST raid_state_function_test_sb 00:09:13.243 ************************************ 00:09:13.243 15:25:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:13.243 15:25:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:13.243 15:25:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.243 15:25:11 bdev_raid -- common/autotest_common.sh@10 
-- # set +x 00:09:13.243 ************************************ 00:09:13.243 START TEST raid_superblock_test 00:09:13.243 ************************************ 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81243 00:09:13.243 15:25:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81243 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81243 ']' 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.243 15:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.243 [2024-11-26 15:25:11.706837] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:09:13.243 [2024-11-26 15:25:11.706968] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81243 ] 00:09:13.502 [2024-11-26 15:25:11.840380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:13.502 [2024-11-26 15:25:11.877739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.502 [2024-11-26 15:25:11.903202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.502 [2024-11-26 15:25:11.945996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.502 [2024-11-26 15:25:11.946039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.070 malloc1 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.070 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.331 [2024-11-26 15:25:12.549650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:14.331 [2024-11-26 15:25:12.549773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.331 [2024-11-26 15:25:12.549818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:14.331 [2024-11-26 15:25:12.549852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.331 [2024-11-26 15:25:12.551955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.331 [2024-11-26 15:25:12.552019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:14.331 pt1 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.331 malloc2 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.331 [2024-11-26 15:25:12.582274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.331 [2024-11-26 15:25:12.582324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.331 [2024-11-26 15:25:12.582341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:14.331 [2024-11-26 15:25:12.582349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.331 [2024-11-26 15:25:12.584335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.331 [2024-11-26 15:25:12.584417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.331 pt2 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.331 malloc3 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.331 [2024-11-26 15:25:12.610914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:14.331 [2024-11-26 15:25:12.611002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.331 [2024-11-26 15:25:12.611053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:14.331 [2024-11-26 15:25:12.611080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:14.331 [2024-11-26 15:25:12.613120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.331 [2024-11-26 15:25:12.613212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:14.331 pt3 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.331 [2024-11-26 15:25:12.622947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:14.331 [2024-11-26 15:25:12.624843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.331 [2024-11-26 15:25:12.624943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:14.331 [2024-11-26 15:25:12.625113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:14.331 [2024-11-26 15:25:12.625164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:14.331 [2024-11-26 15:25:12.625432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:14.331 [2024-11-26 15:25:12.625608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:14.331 [2024-11-26 15:25:12.625649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:14.331 [2024-11-26 15:25:12.625796] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.331 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.332 "name": "raid_bdev1", 00:09:14.332 "uuid": 
"0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:14.332 "strip_size_kb": 0, 00:09:14.332 "state": "online", 00:09:14.332 "raid_level": "raid1", 00:09:14.332 "superblock": true, 00:09:14.332 "num_base_bdevs": 3, 00:09:14.332 "num_base_bdevs_discovered": 3, 00:09:14.332 "num_base_bdevs_operational": 3, 00:09:14.332 "base_bdevs_list": [ 00:09:14.332 { 00:09:14.332 "name": "pt1", 00:09:14.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.332 "is_configured": true, 00:09:14.332 "data_offset": 2048, 00:09:14.332 "data_size": 63488 00:09:14.332 }, 00:09:14.332 { 00:09:14.332 "name": "pt2", 00:09:14.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.332 "is_configured": true, 00:09:14.332 "data_offset": 2048, 00:09:14.332 "data_size": 63488 00:09:14.332 }, 00:09:14.332 { 00:09:14.332 "name": "pt3", 00:09:14.332 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.332 "is_configured": true, 00:09:14.332 "data_offset": 2048, 00:09:14.332 "data_size": 63488 00:09:14.332 } 00:09:14.332 ] 00:09:14.332 }' 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.332 15:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.592 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.592 [2024-11-26 15:25:13.051325] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.852 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.852 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.852 "name": "raid_bdev1", 00:09:14.852 "aliases": [ 00:09:14.852 "0f307ea9-9065-40f0-b91c-345cf31b99be" 00:09:14.852 ], 00:09:14.852 "product_name": "Raid Volume", 00:09:14.852 "block_size": 512, 00:09:14.852 "num_blocks": 63488, 00:09:14.852 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:14.852 "assigned_rate_limits": { 00:09:14.852 "rw_ios_per_sec": 0, 00:09:14.852 "rw_mbytes_per_sec": 0, 00:09:14.852 "r_mbytes_per_sec": 0, 00:09:14.852 "w_mbytes_per_sec": 0 00:09:14.852 }, 00:09:14.852 "claimed": false, 00:09:14.852 "zoned": false, 00:09:14.852 "supported_io_types": { 00:09:14.852 "read": true, 00:09:14.852 "write": true, 00:09:14.852 "unmap": false, 00:09:14.852 "flush": false, 00:09:14.852 "reset": true, 00:09:14.852 "nvme_admin": false, 00:09:14.852 "nvme_io": false, 00:09:14.852 "nvme_io_md": false, 00:09:14.852 "write_zeroes": true, 00:09:14.852 "zcopy": false, 00:09:14.852 "get_zone_info": false, 00:09:14.852 "zone_management": false, 00:09:14.852 "zone_append": false, 00:09:14.852 "compare": false, 00:09:14.852 "compare_and_write": false, 00:09:14.852 "abort": false, 00:09:14.852 "seek_hole": false, 00:09:14.852 "seek_data": false, 00:09:14.852 "copy": false, 00:09:14.852 "nvme_iov_md": false 00:09:14.852 }, 00:09:14.852 "memory_domains": [ 00:09:14.852 { 00:09:14.852 "dma_device_id": "system", 00:09:14.852 
"dma_device_type": 1 00:09:14.852 }, 00:09:14.852 { 00:09:14.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.852 "dma_device_type": 2 00:09:14.852 }, 00:09:14.852 { 00:09:14.852 "dma_device_id": "system", 00:09:14.852 "dma_device_type": 1 00:09:14.852 }, 00:09:14.852 { 00:09:14.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.852 "dma_device_type": 2 00:09:14.852 }, 00:09:14.852 { 00:09:14.852 "dma_device_id": "system", 00:09:14.852 "dma_device_type": 1 00:09:14.852 }, 00:09:14.852 { 00:09:14.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.852 "dma_device_type": 2 00:09:14.852 } 00:09:14.852 ], 00:09:14.853 "driver_specific": { 00:09:14.853 "raid": { 00:09:14.853 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:14.853 "strip_size_kb": 0, 00:09:14.853 "state": "online", 00:09:14.853 "raid_level": "raid1", 00:09:14.853 "superblock": true, 00:09:14.853 "num_base_bdevs": 3, 00:09:14.853 "num_base_bdevs_discovered": 3, 00:09:14.853 "num_base_bdevs_operational": 3, 00:09:14.853 "base_bdevs_list": [ 00:09:14.853 { 00:09:14.853 "name": "pt1", 00:09:14.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.853 "is_configured": true, 00:09:14.853 "data_offset": 2048, 00:09:14.853 "data_size": 63488 00:09:14.853 }, 00:09:14.853 { 00:09:14.853 "name": "pt2", 00:09:14.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.853 "is_configured": true, 00:09:14.853 "data_offset": 2048, 00:09:14.853 "data_size": 63488 00:09:14.853 }, 00:09:14.853 { 00:09:14.853 "name": "pt3", 00:09:14.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.853 "is_configured": true, 00:09:14.853 "data_offset": 2048, 00:09:14.853 "data_size": 63488 00:09:14.853 } 00:09:14.853 ] 00:09:14.853 } 00:09:14.853 } 00:09:14.853 }' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:14.853 pt2 00:09:14.853 pt3' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.853 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.113 [2024-11-26 15:25:13.347462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0f307ea9-9065-40f0-b91c-345cf31b99be 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0f307ea9-9065-40f0-b91c-345cf31b99be ']' 00:09:15.113 15:25:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.113 [2024-11-26 15:25:13.391125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.113 [2024-11-26 15:25:13.391155] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.113 [2024-11-26 15:25:13.391240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.113 [2024-11-26 15:25:13.391316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.113 [2024-11-26 15:25:13.391325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.113 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.114 15:25:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.114 [2024-11-26 15:25:13.547183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:15.114 [2024-11-26 15:25:13.549056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:15.114 [2024-11-26 15:25:13.549144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:15.114 [2024-11-26 15:25:13.549223] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:15.114 [2024-11-26 15:25:13.549307] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:15.114 [2024-11-26 15:25:13.549361] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:15.114 [2024-11-26 15:25:13.549411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.114 [2024-11-26 15:25:13.549438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:09:15.114 request: 00:09:15.114 { 00:09:15.114 "name": "raid_bdev1", 00:09:15.114 "raid_level": "raid1", 00:09:15.114 "base_bdevs": [ 00:09:15.114 "malloc1", 00:09:15.114 "malloc2", 00:09:15.114 "malloc3" 00:09:15.114 ], 00:09:15.114 "superblock": false, 00:09:15.114 "method": "bdev_raid_create", 00:09:15.114 "req_id": 1 00:09:15.114 } 00:09:15.114 Got JSON-RPC error response 00:09:15.114 response: 00:09:15.114 { 00:09:15.114 "code": -17, 00:09:15.114 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:15.114 } 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:15.114 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.374 [2024-11-26 15:25:13.615160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:15.374 [2024-11-26 15:25:13.615276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.374 [2024-11-26 15:25:13.615316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:15.374 [2024-11-26 15:25:13.615343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.374 [2024-11-26 15:25:13.617395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.374 [2024-11-26 15:25:13.617464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:15.374 [2024-11-26 15:25:13.617551] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:15.374 [2024-11-26 15:25:13.617617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:15.374 pt1 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.374 "name": "raid_bdev1", 00:09:15.374 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:15.374 "strip_size_kb": 0, 00:09:15.374 "state": "configuring", 00:09:15.374 "raid_level": "raid1", 00:09:15.374 "superblock": true, 00:09:15.374 "num_base_bdevs": 3, 00:09:15.374 "num_base_bdevs_discovered": 1, 00:09:15.374 "num_base_bdevs_operational": 3, 00:09:15.374 "base_bdevs_list": [ 00:09:15.374 { 00:09:15.374 "name": 
"pt1", 00:09:15.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.374 "is_configured": true, 00:09:15.374 "data_offset": 2048, 00:09:15.374 "data_size": 63488 00:09:15.374 }, 00:09:15.374 { 00:09:15.374 "name": null, 00:09:15.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.374 "is_configured": false, 00:09:15.374 "data_offset": 2048, 00:09:15.374 "data_size": 63488 00:09:15.374 }, 00:09:15.374 { 00:09:15.374 "name": null, 00:09:15.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.374 "is_configured": false, 00:09:15.374 "data_offset": 2048, 00:09:15.374 "data_size": 63488 00:09:15.374 } 00:09:15.374 ] 00:09:15.374 }' 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.374 15:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.634 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:15.634 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:15.634 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.634 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.634 [2024-11-26 15:25:14.067293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:15.634 [2024-11-26 15:25:14.067393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.634 [2024-11-26 15:25:14.067434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:15.634 [2024-11-26 15:25:14.067462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.634 [2024-11-26 15:25:14.067877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.634 [2024-11-26 15:25:14.067932] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:15.634 [2024-11-26 15:25:14.068025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:15.634 [2024-11-26 15:25:14.068073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:15.634 pt2 00:09:15.634 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.634 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:15.634 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.634 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.634 [2024-11-26 15:25:14.079339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:15.634 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.635 15:25:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.635 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.895 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.895 "name": "raid_bdev1", 00:09:15.895 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:15.895 "strip_size_kb": 0, 00:09:15.895 "state": "configuring", 00:09:15.895 "raid_level": "raid1", 00:09:15.895 "superblock": true, 00:09:15.895 "num_base_bdevs": 3, 00:09:15.895 "num_base_bdevs_discovered": 1, 00:09:15.895 "num_base_bdevs_operational": 3, 00:09:15.895 "base_bdevs_list": [ 00:09:15.895 { 00:09:15.895 "name": "pt1", 00:09:15.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.895 "is_configured": true, 00:09:15.895 "data_offset": 2048, 00:09:15.895 "data_size": 63488 00:09:15.895 }, 00:09:15.895 { 00:09:15.895 "name": null, 00:09:15.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.895 "is_configured": false, 00:09:15.895 "data_offset": 0, 00:09:15.895 "data_size": 63488 00:09:15.895 }, 00:09:15.895 { 00:09:15.895 "name": null, 00:09:15.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.895 "is_configured": false, 00:09:15.895 "data_offset": 2048, 00:09:15.895 "data_size": 63488 00:09:15.895 } 00:09:15.895 ] 00:09:15.895 }' 00:09:15.895 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.895 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.155 [2024-11-26 15:25:14.543460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:16.155 [2024-11-26 15:25:14.543582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.155 [2024-11-26 15:25:14.543607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:16.155 [2024-11-26 15:25:14.543630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.155 [2024-11-26 15:25:14.544031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.155 [2024-11-26 15:25:14.544052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:16.155 [2024-11-26 15:25:14.544125] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:16.155 [2024-11-26 15:25:14.544156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:16.155 pt2 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 
-u 00000000-0000-0000-0000-000000000003 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.155 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.155 [2024-11-26 15:25:14.555421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:16.155 [2024-11-26 15:25:14.555472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.155 [2024-11-26 15:25:14.555502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:16.155 [2024-11-26 15:25:14.555512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.156 [2024-11-26 15:25:14.555830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.156 [2024-11-26 15:25:14.555848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:16.156 [2024-11-26 15:25:14.555901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:16.156 [2024-11-26 15:25:14.555921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:16.156 [2024-11-26 15:25:14.556009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:16.156 [2024-11-26 15:25:14.556020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:16.156 [2024-11-26 15:25:14.556256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:16.156 [2024-11-26 15:25:14.556376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:16.156 [2024-11-26 15:25:14.556390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:16.156 [2024-11-26 15:25:14.556492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:16.156 pt3 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.156 15:25:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.156 "name": "raid_bdev1", 00:09:16.156 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:16.156 "strip_size_kb": 0, 00:09:16.156 "state": "online", 00:09:16.156 "raid_level": "raid1", 00:09:16.156 "superblock": true, 00:09:16.156 "num_base_bdevs": 3, 00:09:16.156 "num_base_bdevs_discovered": 3, 00:09:16.156 "num_base_bdevs_operational": 3, 00:09:16.156 "base_bdevs_list": [ 00:09:16.156 { 00:09:16.156 "name": "pt1", 00:09:16.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.156 "is_configured": true, 00:09:16.156 "data_offset": 2048, 00:09:16.156 "data_size": 63488 00:09:16.156 }, 00:09:16.156 { 00:09:16.156 "name": "pt2", 00:09:16.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.156 "is_configured": true, 00:09:16.156 "data_offset": 2048, 00:09:16.156 "data_size": 63488 00:09:16.156 }, 00:09:16.156 { 00:09:16.156 "name": "pt3", 00:09:16.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.156 "is_configured": true, 00:09:16.156 "data_offset": 2048, 00:09:16.156 "data_size": 63488 00:09:16.156 } 00:09:16.156 ] 00:09:16.156 }' 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.156 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.727 15:25:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.727 [2024-11-26 15:25:14.999819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.727 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.727 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.727 "name": "raid_bdev1", 00:09:16.727 "aliases": [ 00:09:16.727 "0f307ea9-9065-40f0-b91c-345cf31b99be" 00:09:16.727 ], 00:09:16.727 "product_name": "Raid Volume", 00:09:16.727 "block_size": 512, 00:09:16.727 "num_blocks": 63488, 00:09:16.727 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:16.727 "assigned_rate_limits": { 00:09:16.727 "rw_ios_per_sec": 0, 00:09:16.727 "rw_mbytes_per_sec": 0, 00:09:16.727 "r_mbytes_per_sec": 0, 00:09:16.727 "w_mbytes_per_sec": 0 00:09:16.727 }, 00:09:16.727 "claimed": false, 00:09:16.727 "zoned": false, 00:09:16.727 "supported_io_types": { 00:09:16.727 "read": true, 00:09:16.727 "write": true, 00:09:16.727 "unmap": false, 00:09:16.727 "flush": false, 00:09:16.727 "reset": true, 00:09:16.727 "nvme_admin": false, 00:09:16.727 "nvme_io": false, 00:09:16.727 "nvme_io_md": false, 00:09:16.727 "write_zeroes": true, 00:09:16.727 "zcopy": false, 00:09:16.727 "get_zone_info": false, 00:09:16.727 "zone_management": false, 00:09:16.727 "zone_append": false, 00:09:16.727 "compare": false, 00:09:16.727 "compare_and_write": false, 00:09:16.727 "abort": false, 00:09:16.727 "seek_hole": false, 00:09:16.727 "seek_data": false, 00:09:16.727 "copy": false, 00:09:16.727 
"nvme_iov_md": false 00:09:16.727 }, 00:09:16.727 "memory_domains": [ 00:09:16.727 { 00:09:16.727 "dma_device_id": "system", 00:09:16.727 "dma_device_type": 1 00:09:16.727 }, 00:09:16.727 { 00:09:16.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.727 "dma_device_type": 2 00:09:16.727 }, 00:09:16.727 { 00:09:16.727 "dma_device_id": "system", 00:09:16.727 "dma_device_type": 1 00:09:16.727 }, 00:09:16.727 { 00:09:16.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.727 "dma_device_type": 2 00:09:16.727 }, 00:09:16.727 { 00:09:16.727 "dma_device_id": "system", 00:09:16.728 "dma_device_type": 1 00:09:16.728 }, 00:09:16.728 { 00:09:16.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.728 "dma_device_type": 2 00:09:16.728 } 00:09:16.728 ], 00:09:16.728 "driver_specific": { 00:09:16.728 "raid": { 00:09:16.728 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:16.728 "strip_size_kb": 0, 00:09:16.728 "state": "online", 00:09:16.728 "raid_level": "raid1", 00:09:16.728 "superblock": true, 00:09:16.728 "num_base_bdevs": 3, 00:09:16.728 "num_base_bdevs_discovered": 3, 00:09:16.728 "num_base_bdevs_operational": 3, 00:09:16.728 "base_bdevs_list": [ 00:09:16.728 { 00:09:16.728 "name": "pt1", 00:09:16.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.728 "is_configured": true, 00:09:16.728 "data_offset": 2048, 00:09:16.728 "data_size": 63488 00:09:16.728 }, 00:09:16.728 { 00:09:16.728 "name": "pt2", 00:09:16.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.728 "is_configured": true, 00:09:16.728 "data_offset": 2048, 00:09:16.728 "data_size": 63488 00:09:16.728 }, 00:09:16.728 { 00:09:16.728 "name": "pt3", 00:09:16.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.728 "is_configured": true, 00:09:16.728 "data_offset": 2048, 00:09:16.728 "data_size": 63488 00:09:16.728 } 00:09:16.728 ] 00:09:16.728 } 00:09:16.728 } 00:09:16.728 }' 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:16.728 pt2 00:09:16.728 pt3' 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.728 15:25:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.728 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.990 [2024-11-26 15:25:15.235850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0f307ea9-9065-40f0-b91c-345cf31b99be '!=' 
0f307ea9-9065-40f0-b91c-345cf31b99be ']' 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.990 [2024-11-26 15:25:15.275640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.990 15:25:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.990 "name": "raid_bdev1", 00:09:16.990 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:16.990 "strip_size_kb": 0, 00:09:16.990 "state": "online", 00:09:16.990 "raid_level": "raid1", 00:09:16.990 "superblock": true, 00:09:16.990 "num_base_bdevs": 3, 00:09:16.990 "num_base_bdevs_discovered": 2, 00:09:16.990 "num_base_bdevs_operational": 2, 00:09:16.990 "base_bdevs_list": [ 00:09:16.990 { 00:09:16.990 "name": null, 00:09:16.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.990 "is_configured": false, 00:09:16.990 "data_offset": 0, 00:09:16.990 "data_size": 63488 00:09:16.990 }, 00:09:16.990 { 00:09:16.990 "name": "pt2", 00:09:16.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.990 "is_configured": true, 00:09:16.990 "data_offset": 2048, 00:09:16.990 "data_size": 63488 00:09:16.990 }, 00:09:16.990 { 00:09:16.990 "name": "pt3", 00:09:16.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.990 "is_configured": true, 00:09:16.990 "data_offset": 2048, 00:09:16.990 "data_size": 63488 00:09:16.990 } 00:09:16.990 ] 00:09:16.990 }' 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.990 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.250 [2024-11-26 15:25:15.667719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:17.250 [2024-11-26 15:25:15.667789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.250 [2024-11-26 15:25:15.667878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.250 [2024-11-26 15:25:15.667950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.250 [2024-11-26 15:25:15.668003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 
-- # rpc_cmd bdev_passthru_delete pt2 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.250 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.510 [2024-11-26 15:25:15.735738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:17.510 [2024-11-26 15:25:15.735791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.510 
[2024-11-26 15:25:15.735808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:17.510 [2024-11-26 15:25:15.735818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.510 [2024-11-26 15:25:15.737969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.510 pt2 00:09:17.510 [2024-11-26 15:25:15.738049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:17.510 [2024-11-26 15:25:15.738121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:17.510 [2024-11-26 15:25:15.738165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.510 15:25:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.510 "name": "raid_bdev1", 00:09:17.510 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:17.510 "strip_size_kb": 0, 00:09:17.510 "state": "configuring", 00:09:17.510 "raid_level": "raid1", 00:09:17.510 "superblock": true, 00:09:17.510 "num_base_bdevs": 3, 00:09:17.510 "num_base_bdevs_discovered": 1, 00:09:17.510 "num_base_bdevs_operational": 2, 00:09:17.510 "base_bdevs_list": [ 00:09:17.510 { 00:09:17.510 "name": null, 00:09:17.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.510 "is_configured": false, 00:09:17.510 "data_offset": 2048, 00:09:17.510 "data_size": 63488 00:09:17.510 }, 00:09:17.510 { 00:09:17.510 "name": "pt2", 00:09:17.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.510 "is_configured": true, 00:09:17.510 "data_offset": 2048, 00:09:17.510 "data_size": 63488 00:09:17.510 }, 00:09:17.510 { 00:09:17.510 "name": null, 00:09:17.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.510 "is_configured": false, 00:09:17.510 "data_offset": 2048, 00:09:17.510 "data_size": 63488 00:09:17.510 } 00:09:17.510 ] 00:09:17.510 }' 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.510 15:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( 
i++ )) 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.771 [2024-11-26 15:25:16.203926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:17.771 [2024-11-26 15:25:16.204065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.771 [2024-11-26 15:25:16.204106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:17.771 [2024-11-26 15:25:16.204140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.771 [2024-11-26 15:25:16.204569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.771 [2024-11-26 15:25:16.204630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:17.771 [2024-11-26 15:25:16.204739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:17.771 [2024-11-26 15:25:16.204798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:17.771 [2024-11-26 15:25:16.204916] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.771 [2024-11-26 15:25:16.204958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:17.771 [2024-11-26 15:25:16.205218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:17.771 [2024-11-26 15:25:16.205371] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:17.771 [2024-11-26 15:25:16.205409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:17.771 [2024-11-26 15:25:16.205550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.771 pt3 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.771 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.031 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.031 "name": "raid_bdev1", 00:09:18.031 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:18.031 "strip_size_kb": 0, 00:09:18.031 "state": "online", 00:09:18.031 "raid_level": "raid1", 00:09:18.031 "superblock": true, 00:09:18.031 "num_base_bdevs": 3, 00:09:18.031 "num_base_bdevs_discovered": 2, 00:09:18.031 "num_base_bdevs_operational": 2, 00:09:18.031 "base_bdevs_list": [ 00:09:18.031 { 00:09:18.031 "name": null, 00:09:18.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.031 "is_configured": false, 00:09:18.031 "data_offset": 2048, 00:09:18.031 "data_size": 63488 00:09:18.031 }, 00:09:18.031 { 00:09:18.031 "name": "pt2", 00:09:18.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.031 "is_configured": true, 00:09:18.031 "data_offset": 2048, 00:09:18.031 "data_size": 63488 00:09:18.031 }, 00:09:18.031 { 00:09:18.031 "name": "pt3", 00:09:18.031 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.031 "is_configured": true, 00:09:18.031 "data_offset": 2048, 00:09:18.031 "data_size": 63488 00:09:18.031 } 00:09:18.031 ] 00:09:18.031 }' 00:09:18.031 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.031 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.291 [2024-11-26 15:25:16.651987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.291 [2024-11-26 
15:25:16.652061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.291 [2024-11-26 15:25:16.652153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.291 [2024-11-26 15:25:16.652240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.291 [2024-11-26 15:25:16.652295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.291 15:25:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.291 [2024-11-26 15:25:16.719985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.291 [2024-11-26 15:25:16.720084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.291 [2024-11-26 15:25:16.720118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:18.291 [2024-11-26 15:25:16.720143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.291 [2024-11-26 15:25:16.722294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.291 [2024-11-26 15:25:16.722375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.291 [2024-11-26 15:25:16.722465] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:18.291 [2024-11-26 15:25:16.722521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:18.291 [2024-11-26 15:25:16.722654] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:18.291 [2024-11-26 15:25:16.722715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.291 [2024-11-26 15:25:16.722752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:09:18.291 [2024-11-26 15:25:16.722827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.291 pt1 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.291 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.291 "name": "raid_bdev1", 00:09:18.291 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:18.291 "strip_size_kb": 
0, 00:09:18.291 "state": "configuring", 00:09:18.291 "raid_level": "raid1", 00:09:18.291 "superblock": true, 00:09:18.291 "num_base_bdevs": 3, 00:09:18.291 "num_base_bdevs_discovered": 1, 00:09:18.292 "num_base_bdevs_operational": 2, 00:09:18.292 "base_bdevs_list": [ 00:09:18.292 { 00:09:18.292 "name": null, 00:09:18.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.292 "is_configured": false, 00:09:18.292 "data_offset": 2048, 00:09:18.292 "data_size": 63488 00:09:18.292 }, 00:09:18.292 { 00:09:18.292 "name": "pt2", 00:09:18.292 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.292 "is_configured": true, 00:09:18.292 "data_offset": 2048, 00:09:18.292 "data_size": 63488 00:09:18.292 }, 00:09:18.292 { 00:09:18.292 "name": null, 00:09:18.292 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.292 "is_configured": false, 00:09:18.292 "data_offset": 2048, 00:09:18.292 "data_size": 63488 00:09:18.292 } 00:09:18.292 ] 00:09:18.292 }' 00:09:18.292 15:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.292 15:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.861 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 
00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.862 [2024-11-26 15:25:17.144120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:18.862 [2024-11-26 15:25:17.144225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.862 [2024-11-26 15:25:17.144261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:18.862 [2024-11-26 15:25:17.144289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.862 [2024-11-26 15:25:17.144678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.862 [2024-11-26 15:25:17.144731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:18.862 [2024-11-26 15:25:17.144827] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:18.862 [2024-11-26 15:25:17.144895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:18.862 [2024-11-26 15:25:17.145018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:09:18.862 [2024-11-26 15:25:17.145053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:18.862 [2024-11-26 15:25:17.145306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:09:18.862 [2024-11-26 15:25:17.145462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:09:18.862 [2024-11-26 15:25:17.145504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:09:18.862 [2024-11-26 15:25:17.145632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.862 pt3 00:09:18.862 15:25:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.862 "name": "raid_bdev1", 00:09:18.862 "uuid": "0f307ea9-9065-40f0-b91c-345cf31b99be", 00:09:18.862 "strip_size_kb": 0, 00:09:18.862 "state": "online", 
00:09:18.862 "raid_level": "raid1", 00:09:18.862 "superblock": true, 00:09:18.862 "num_base_bdevs": 3, 00:09:18.862 "num_base_bdevs_discovered": 2, 00:09:18.862 "num_base_bdevs_operational": 2, 00:09:18.862 "base_bdevs_list": [ 00:09:18.862 { 00:09:18.862 "name": null, 00:09:18.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.862 "is_configured": false, 00:09:18.862 "data_offset": 2048, 00:09:18.862 "data_size": 63488 00:09:18.862 }, 00:09:18.862 { 00:09:18.862 "name": "pt2", 00:09:18.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.862 "is_configured": true, 00:09:18.862 "data_offset": 2048, 00:09:18.862 "data_size": 63488 00:09:18.862 }, 00:09:18.862 { 00:09:18.862 "name": "pt3", 00:09:18.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.862 "is_configured": true, 00:09:18.862 "data_offset": 2048, 00:09:18.862 "data_size": 63488 00:09:18.862 } 00:09:18.862 ] 00:09:18.862 }' 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.862 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.121 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:19.121 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.121 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.121 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:19.121 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:19.381 [2024-11-26 15:25:17.632511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0f307ea9-9065-40f0-b91c-345cf31b99be '!=' 0f307ea9-9065-40f0-b91c-345cf31b99be ']' 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81243 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81243 ']' 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81243 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81243 00:09:19.381 killing process with pid 81243 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81243' 00:09:19.381 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 81243 00:09:19.381 [2024-11-26 15:25:17.712337] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.382 [2024-11-26 15:25:17.712416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.382 
[2024-11-26 15:25:17.712473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.382 [2024-11-26 15:25:17.712486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:09:19.382 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81243 00:09:19.382 [2024-11-26 15:25:17.746322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.641 15:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:19.641 00:09:19.641 real 0m6.340s 00:09:19.641 user 0m10.643s 00:09:19.641 sys 0m1.308s 00:09:19.641 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.641 15:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.641 ************************************ 00:09:19.641 END TEST raid_superblock_test 00:09:19.641 ************************************ 00:09:19.641 15:25:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:19.641 15:25:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:19.641 15:25:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.641 15:25:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.641 ************************************ 00:09:19.641 START TEST raid_read_error_test 00:09:19.641 ************************************ 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 
00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:19.641 15:25:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.huKbrbeqPY 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81672 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81672 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 81672 ']' 00:09:19.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.641 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.901 [2024-11-26 15:25:18.134434] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:09:19.901 [2024-11-26 15:25:18.134563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81672 ] 00:09:19.901 [2024-11-26 15:25:18.267923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:19.901 [2024-11-26 15:25:18.305807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.901 [2024-11-26 15:25:18.330244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.901 [2024-11-26 15:25:18.372442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.901 [2024-11-26 15:25:18.372559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.842 BaseBdev1_malloc 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.842 15:25:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.842 true 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.842 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.842 [2024-11-26 15:25:18.988075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:20.842 [2024-11-26 15:25:18.988131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.842 [2024-11-26 15:25:18.988154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:20.842 [2024-11-26 15:25:18.988170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.842 [2024-11-26 15:25:18.990270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.842 [2024-11-26 15:25:18.990362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:20.842 BaseBdev1 00:09:20.843 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.843 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.843 15:25:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:20.843 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.843 15:25:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.843 BaseBdev2_malloc 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.843 true 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.843 [2024-11-26 15:25:19.028723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:20.843 [2024-11-26 15:25:19.028835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.843 [2024-11-26 15:25:19.028854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:20.843 [2024-11-26 15:25:19.028864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.843 [2024-11-26 15:25:19.030946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.843 [2024-11-26 15:25:19.030984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:20.843 BaseBdev2 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.843 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 BaseBdev3_malloc 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 true 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 [2024-11-26 15:25:19.069259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:20.844 [2024-11-26 15:25:19.069306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.844 [2024-11-26 15:25:19.069322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:20.844 [2024-11-26 15:25:19.069349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.844 [2024-11-26 15:25:19.071496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.844 [2024-11-26 15:25:19.071584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:20.844 BaseBdev3 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.844 15:25:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.844 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.844 [2024-11-26 15:25:19.081306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.844 [2024-11-26 15:25:19.083085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.844 [2024-11-26 15:25:19.083150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.844 [2024-11-26 15:25:19.083398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.844 [2024-11-26 15:25:19.083443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:20.844 [2024-11-26 15:25:19.083708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:20.845 [2024-11-26 15:25:19.083900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.845 [2024-11-26 15:25:19.083945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:20.845 [2024-11-26 15:25:19.084120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.845 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.845 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:20.845 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.845 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.845 15:25:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.845 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.845 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.845 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.846 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.847 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.847 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.847 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.847 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.847 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.847 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.847 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.847 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.847 "name": "raid_bdev1", 00:09:20.847 "uuid": "57b9fe9c-034f-4eb2-8c59-582b4a8f4c5e", 00:09:20.847 "strip_size_kb": 0, 00:09:20.847 "state": "online", 00:09:20.847 "raid_level": "raid1", 00:09:20.847 "superblock": true, 00:09:20.847 "num_base_bdevs": 3, 00:09:20.847 "num_base_bdevs_discovered": 3, 00:09:20.847 "num_base_bdevs_operational": 3, 00:09:20.847 "base_bdevs_list": [ 00:09:20.847 { 00:09:20.847 "name": "BaseBdev1", 00:09:20.847 "uuid": "ebedbb08-417d-5e8f-b546-689a2c3de74c", 00:09:20.847 "is_configured": true, 00:09:20.847 "data_offset": 2048, 00:09:20.847 "data_size": 63488 00:09:20.847 }, 00:09:20.847 
{ 00:09:20.847 "name": "BaseBdev2", 00:09:20.847 "uuid": "9c844468-1813-5f60-87ba-66383c03051e", 00:09:20.847 "is_configured": true, 00:09:20.848 "data_offset": 2048, 00:09:20.848 "data_size": 63488 00:09:20.848 }, 00:09:20.848 { 00:09:20.848 "name": "BaseBdev3", 00:09:20.848 "uuid": "dae740e7-7585-593c-b1f0-36781fa9df00", 00:09:20.848 "is_configured": true, 00:09:20.848 "data_offset": 2048, 00:09:20.848 "data_size": 63488 00:09:20.848 } 00:09:20.848 ] 00:09:20.848 }' 00:09:20.848 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.849 15:25:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.113 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:21.113 15:25:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:21.373 [2024-11-26 15:25:19.641848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.312 "name": "raid_bdev1", 00:09:22.312 "uuid": "57b9fe9c-034f-4eb2-8c59-582b4a8f4c5e", 00:09:22.312 "strip_size_kb": 0, 00:09:22.312 "state": "online", 00:09:22.312 "raid_level": "raid1", 00:09:22.312 "superblock": true, 00:09:22.312 "num_base_bdevs": 3, 00:09:22.312 
"num_base_bdevs_discovered": 3, 00:09:22.312 "num_base_bdevs_operational": 3, 00:09:22.312 "base_bdevs_list": [ 00:09:22.312 { 00:09:22.312 "name": "BaseBdev1", 00:09:22.312 "uuid": "ebedbb08-417d-5e8f-b546-689a2c3de74c", 00:09:22.312 "is_configured": true, 00:09:22.312 "data_offset": 2048, 00:09:22.312 "data_size": 63488 00:09:22.312 }, 00:09:22.312 { 00:09:22.312 "name": "BaseBdev2", 00:09:22.312 "uuid": "9c844468-1813-5f60-87ba-66383c03051e", 00:09:22.312 "is_configured": true, 00:09:22.312 "data_offset": 2048, 00:09:22.312 "data_size": 63488 00:09:22.312 }, 00:09:22.312 { 00:09:22.312 "name": "BaseBdev3", 00:09:22.312 "uuid": "dae740e7-7585-593c-b1f0-36781fa9df00", 00:09:22.312 "is_configured": true, 00:09:22.312 "data_offset": 2048, 00:09:22.312 "data_size": 63488 00:09:22.312 } 00:09:22.312 ] 00:09:22.312 }' 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.312 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.572 [2024-11-26 15:25:20.983289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.572 [2024-11-26 15:25:20.983364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.572 [2024-11-26 15:25:20.985963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.572 [2024-11-26 15:25:20.986066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.572 [2024-11-26 15:25:20.986203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.572 
[2024-11-26 15:25:20.986251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:22.572 { 00:09:22.572 "results": [ 00:09:22.572 { 00:09:22.572 "job": "raid_bdev1", 00:09:22.572 "core_mask": "0x1", 00:09:22.572 "workload": "randrw", 00:09:22.572 "percentage": 50, 00:09:22.572 "status": "finished", 00:09:22.572 "queue_depth": 1, 00:09:22.572 "io_size": 131072, 00:09:22.572 "runtime": 1.339638, 00:09:22.572 "iops": 14797.281056524225, 00:09:22.572 "mibps": 1849.6601320655282, 00:09:22.572 "io_failed": 0, 00:09:22.572 "io_timeout": 0, 00:09:22.572 "avg_latency_us": 65.11887398631731, 00:09:22.572 "min_latency_us": 21.42072692408263, 00:09:22.572 "max_latency_us": 1378.0667654493159 00:09:22.572 } 00:09:22.572 ], 00:09:22.572 "core_count": 1 00:09:22.572 } 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81672 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 81672 ']' 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 81672 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.572 15:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81672 00:09:22.572 killing process with pid 81672 00:09:22.572 15:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.572 15:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.572 15:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81672' 00:09:22.572 
15:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 81672 00:09:22.572 15:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 81672 00:09:22.572 [2024-11-26 15:25:21.029621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.832 [2024-11-26 15:25:21.055675] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.huKbrbeqPY 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:22.832 ************************************ 00:09:22.832 END TEST raid_read_error_test 00:09:22.832 ************************************ 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:22.832 00:09:22.832 real 0m3.235s 00:09:22.832 user 0m4.102s 00:09:22.832 sys 0m0.533s 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.832 15:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.091 15:25:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:23.091 15:25:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:23.091 15:25:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.091 15:25:21 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.091 ************************************ 00:09:23.091 START TEST raid_write_error_test 00:09:23.091 ************************************ 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:23.091 
15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GRoJUfHWVF 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81801 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81801 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 81801 ']' 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.091 15:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.091 [2024-11-26 15:25:21.443266] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:09:23.091 [2024-11-26 15:25:21.443404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81801 ] 00:09:23.351 [2024-11-26 15:25:21.580036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:23.351 [2024-11-26 15:25:21.615540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.351 [2024-11-26 15:25:21.640494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.351 [2024-11-26 15:25:21.683732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.351 [2024-11-26 15:25:21.683767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.921 BaseBdev1_malloc 00:09:23.921 15:25:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.921 true 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.921 [2024-11-26 15:25:22.299604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:23.921 [2024-11-26 15:25:22.299704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.921 [2024-11-26 15:25:22.299740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:23.921 [2024-11-26 15:25:22.299754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.921 [2024-11-26 15:25:22.301822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.921 [2024-11-26 15:25:22.301862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:23.921 BaseBdev1 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.921 BaseBdev2_malloc 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.921 true 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.921 [2024-11-26 15:25:22.340201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:23.921 [2024-11-26 15:25:22.340246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.921 [2024-11-26 15:25:22.340261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:23.921 [2024-11-26 15:25:22.340270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.921 [2024-11-26 15:25:22.342286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.921 [2024-11-26 15:25:22.342322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:23.921 BaseBdev2 00:09:23.921 15:25:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.922 BaseBdev3_malloc 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.922 true 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.922 [2024-11-26 15:25:22.380687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:23.922 [2024-11-26 15:25:22.380732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.922 [2024-11-26 15:25:22.380752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:23.922 [2024-11-26 15:25:22.380763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.922 [2024-11-26 15:25:22.382797] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.922 [2024-11-26 15:25:22.382836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:23.922 BaseBdev3 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.922 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.922 [2024-11-26 15:25:22.392733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.181 [2024-11-26 15:25:22.394577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.181 [2024-11-26 15:25:22.394645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.181 [2024-11-26 15:25:22.394818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.181 [2024-11-26 15:25:22.394829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:24.181 [2024-11-26 15:25:22.395058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:24.181 [2024-11-26 15:25:22.395240] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.181 [2024-11-26 15:25:22.395254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:24.181 [2024-11-26 15:25:22.395374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.181 15:25:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.181 "name": "raid_bdev1", 00:09:24.181 "uuid": "06a6bf52-de66-41f9-90ca-34a37ba11aa8", 00:09:24.181 "strip_size_kb": 0, 00:09:24.181 "state": "online", 00:09:24.181 "raid_level": "raid1", 00:09:24.181 "superblock": true, 00:09:24.181 
"num_base_bdevs": 3, 00:09:24.181 "num_base_bdevs_discovered": 3, 00:09:24.181 "num_base_bdevs_operational": 3, 00:09:24.181 "base_bdevs_list": [ 00:09:24.181 { 00:09:24.181 "name": "BaseBdev1", 00:09:24.181 "uuid": "d43b343f-d1b8-59eb-865b-47312897001a", 00:09:24.181 "is_configured": true, 00:09:24.181 "data_offset": 2048, 00:09:24.181 "data_size": 63488 00:09:24.181 }, 00:09:24.181 { 00:09:24.181 "name": "BaseBdev2", 00:09:24.181 "uuid": "1e5a1cb8-2525-5361-a173-015bcba950a8", 00:09:24.181 "is_configured": true, 00:09:24.181 "data_offset": 2048, 00:09:24.181 "data_size": 63488 00:09:24.181 }, 00:09:24.181 { 00:09:24.181 "name": "BaseBdev3", 00:09:24.181 "uuid": "4a0e87b1-b273-5991-ae95-353d1595de4e", 00:09:24.181 "is_configured": true, 00:09:24.181 "data_offset": 2048, 00:09:24.181 "data_size": 63488 00:09:24.181 } 00:09:24.181 ] 00:09:24.181 }' 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.181 15:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.441 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:24.441 15:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.441 [2024-11-26 15:25:22.909279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.381 [2024-11-26 15:25:23.833869] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:25.381 [2024-11-26 15:25:23.833998] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:25.381 [2024-11-26 15:25:23.834263] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006b10
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.381 15:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.641 15:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.641 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:25.641 "name": "raid_bdev1",
00:09:25.641 "uuid": "06a6bf52-de66-41f9-90ca-34a37ba11aa8",
00:09:25.641 "strip_size_kb": 0,
00:09:25.641 "state": "online",
00:09:25.641 "raid_level": "raid1",
00:09:25.641 "superblock": true,
00:09:25.641 "num_base_bdevs": 3,
00:09:25.641 "num_base_bdevs_discovered": 2,
00:09:25.641 "num_base_bdevs_operational": 2,
00:09:25.641 "base_bdevs_list": [
00:09:25.641 {
00:09:25.641 "name": null,
00:09:25.641 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:25.641 "is_configured": false,
00:09:25.641 "data_offset": 0,
00:09:25.641 "data_size": 63488
00:09:25.641 },
00:09:25.641 {
00:09:25.641 "name": "BaseBdev2",
00:09:25.641 "uuid": "1e5a1cb8-2525-5361-a173-015bcba950a8",
00:09:25.641 "is_configured": true,
00:09:25.641 "data_offset": 2048,
00:09:25.641 "data_size": 63488
00:09:25.641 },
00:09:25.641 {
00:09:25.641 "name": "BaseBdev3",
00:09:25.641 "uuid": "4a0e87b1-b273-5991-ae95-353d1595de4e",
00:09:25.641 "is_configured": true,
00:09:25.641 "data_offset": 2048,
00:09:25.641 "data_size": 63488
00:09:25.641 }
00:09:25.641 ]
00:09:25.641 }'
00:09:25.641 15:25:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:25.641 15:25:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:25.901 [2024-11-26 15:25:24.285035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:25.901 [2024-11-26 15:25:24.285121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:25.901 [2024-11-26 15:25:24.287598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:25.901 [2024-11-26 15:25:24.287694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:25.901 [2024-11-26 15:25:24.287797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:25.901 [2024-11-26 15:25:24.287856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:09:25.901 {
00:09:25.901 "results": [
00:09:25.901 {
00:09:25.901 "job": "raid_bdev1",
00:09:25.901 "core_mask": "0x1",
00:09:25.901 "workload": "randrw",
00:09:25.901 "percentage": 50,
00:09:25.901 "status": "finished",
00:09:25.901 "queue_depth": 1,
00:09:25.901 "io_size": 131072,
00:09:25.901 "runtime": 1.37389,
00:09:25.901 "iops": 16548.631986549142,
00:09:25.901 "mibps": 2068.578998318643,
00:09:25.901 "io_failed": 0,
00:09:25.901 "io_timeout": 0,
00:09:25.901 "avg_latency_us": 57.97646378766584,
00:09:25.901 "min_latency_us": 21.86699206833435,
00:09:25.901 "max_latency_us": 1370.9265231412883
00:09:25.901 }
00:09:25.901 ],
00:09:25.901 "core_count": 1
00:09:25.901 }
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81801
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 81801 ']'
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 81801
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81801
killing process with pid 81801
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81801'
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 81801
00:09:25.901 [2024-11-26 15:25:24.335647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:25.901 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 81801
00:09:25.901 [2024-11-26 15:25:24.361115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GRoJUfHWVF
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
************************************
00:09:26.162 END TEST raid_write_error_test
00:09:26.162 ************************************
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:09:26.162
00:09:26.162 real 0m3.237s
00:09:26.162 user 0m4.099s
00:09:26.162 sys 0m0.518s
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:26.162 15:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.421 15:25:24 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:09:26.421 15:25:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:26.421 15:25:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:09:26.421 15:25:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:26.421 15:25:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:26.421 15:25:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:26.421 ************************************
00:09:26.421 START TEST raid_state_function_test
00:09:26.421 ************************************
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:26.421 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81934
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81934'
Process raid pid: 81934
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81934
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81934 ']'
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:26.422 15:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:26.422 [2024-11-26 15:25:24.746226] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization...
00:09:26.422 [2024-11-26 15:25:24.746442] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:26.422 [2024-11-26 15:25:24.882314] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:26.681 [2024-11-26 15:25:24.921415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:26.681 [2024-11-26 15:25:24.946827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:26.681 [2024-11-26 15:25:24.989396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:26.681 [2024-11-26 15:25:24.989428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.251 [2024-11-26 15:25:25.572017] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:27.251 [2024-11-26 15:25:25.572139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:27.251 [2024-11-26 15:25:25.572155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:27.251 [2024-11-26 15:25:25.572163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:27.251 [2024-11-26 15:25:25.572174] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:27.251 [2024-11-26 15:25:25.572191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:27.251 [2024-11-26 15:25:25.572199] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:27.251 [2024-11-26 15:25:25.572205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.251 "name": "Existed_Raid",
00:09:27.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.251 "strip_size_kb": 64,
00:09:27.251 "state": "configuring",
00:09:27.251 "raid_level": "raid0",
00:09:27.251 "superblock": false,
00:09:27.251 "num_base_bdevs": 4,
00:09:27.251 "num_base_bdevs_discovered": 0,
00:09:27.251 "num_base_bdevs_operational": 4,
00:09:27.251 "base_bdevs_list": [
00:09:27.251 {
00:09:27.251 "name": "BaseBdev1",
00:09:27.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.251 "is_configured": false,
00:09:27.251 "data_offset": 0,
00:09:27.251 "data_size": 0
00:09:27.251 },
00:09:27.251 {
00:09:27.251 "name": "BaseBdev2",
00:09:27.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.251 "is_configured": false,
00:09:27.251 "data_offset": 0,
00:09:27.251 "data_size": 0
00:09:27.251 },
00:09:27.251 {
00:09:27.251 "name": "BaseBdev3",
00:09:27.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.251 "is_configured": false,
00:09:27.251 "data_offset": 0,
00:09:27.251 "data_size": 0
00:09:27.251 },
00:09:27.251 {
00:09:27.251 "name": "BaseBdev4",
00:09:27.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.251 "is_configured": false,
00:09:27.251 "data_offset": 0,
00:09:27.251 "data_size": 0
00:09:27.251 }
00:09:27.251 ]
00:09:27.251 }'
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.251 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.511 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:27.511 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.511 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.511 [2024-11-26 15:25:25.976032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:27.511 [2024-11-26 15:25:25.976120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring
00:09:27.511 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.511 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:27.511 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.511 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.511 [2024-11-26 15:25:25.984048] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:27.511 [2024-11-26 15:25:25.984140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:27.511 [2024-11-26 15:25:25.984170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:27.511 [2024-11-26 15:25:25.984201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:27.511 [2024-11-26 15:25:25.984222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:27.511 [2024-11-26 15:25:25.984242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:27.511 [2024-11-26 15:25:25.984262] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:27.511 [2024-11-26 15:25:25.984286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:27.772 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.772 15:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:27.772 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.772 15:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.772 [2024-11-26 15:25:26.000990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:27.772 BaseBdev1
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.772 [
00:09:27.772 {
00:09:27.772 "name": "BaseBdev1",
00:09:27.772 "aliases": [
00:09:27.772 "ea58fe11-f176-46b6-a3f1-59406f3e5070"
00:09:27.772 ],
00:09:27.772 "product_name": "Malloc disk",
00:09:27.772 "block_size": 512,
00:09:27.772 "num_blocks": 65536,
00:09:27.772 "uuid": "ea58fe11-f176-46b6-a3f1-59406f3e5070",
00:09:27.772 "assigned_rate_limits": {
00:09:27.772 "rw_ios_per_sec": 0,
00:09:27.772 "rw_mbytes_per_sec": 0,
00:09:27.772 "r_mbytes_per_sec": 0,
00:09:27.772 "w_mbytes_per_sec": 0
00:09:27.772 },
00:09:27.772 "claimed": true,
00:09:27.772 "claim_type": "exclusive_write",
00:09:27.772 "zoned": false,
00:09:27.772 "supported_io_types": {
00:09:27.772 "read": true,
00:09:27.772 "write": true,
00:09:27.772 "unmap": true,
00:09:27.772 "flush": true,
00:09:27.772 "reset": true,
00:09:27.772 "nvme_admin": false,
00:09:27.772 "nvme_io": false,
00:09:27.772 "nvme_io_md": false,
00:09:27.772 "write_zeroes": true,
00:09:27.772 "zcopy": true,
00:09:27.772 "get_zone_info": false,
00:09:27.772 "zone_management": false,
00:09:27.772 "zone_append": false,
00:09:27.772 "compare": false,
00:09:27.772 "compare_and_write": false,
00:09:27.772 "abort": true,
00:09:27.772 "seek_hole": false,
00:09:27.772 "seek_data": false,
00:09:27.772 "copy": true,
00:09:27.772 "nvme_iov_md": false
00:09:27.772 },
00:09:27.772 "memory_domains": [
00:09:27.772 {
00:09:27.772 "dma_device_id": "system",
00:09:27.772 "dma_device_type": 1
00:09:27.772 },
00:09:27.772 {
00:09:27.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:27.772 "dma_device_type": 2
00:09:27.772 }
00:09:27.772 ],
00:09:27.772 "driver_specific": {}
00:09:27.772 }
00:09:27.772 ]
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.772 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.772 "name": "Existed_Raid",
00:09:27.772 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.772 "strip_size_kb": 64,
00:09:27.772 "state": "configuring",
00:09:27.772 "raid_level": "raid0",
00:09:27.772 "superblock": false,
00:09:27.772 "num_base_bdevs": 4,
00:09:27.772 "num_base_bdevs_discovered": 1,
00:09:27.772 "num_base_bdevs_operational": 4,
00:09:27.773 "base_bdevs_list": [
00:09:27.773 {
00:09:27.773 "name": "BaseBdev1",
00:09:27.773 "uuid": "ea58fe11-f176-46b6-a3f1-59406f3e5070",
00:09:27.773 "is_configured": true,
00:09:27.773 "data_offset": 0,
00:09:27.773 "data_size": 65536
00:09:27.773 },
00:09:27.773 {
00:09:27.773 "name": "BaseBdev2",
00:09:27.773 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.773 "is_configured": false,
00:09:27.773 "data_offset": 0,
00:09:27.773 "data_size": 0
00:09:27.773 },
00:09:27.773 {
00:09:27.773 "name": "BaseBdev3",
00:09:27.773 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.773 "is_configured": false,
00:09:27.773 "data_offset": 0,
00:09:27.773 "data_size": 0
00:09:27.773 },
00:09:27.773 {
00:09:27.773 "name": "BaseBdev4",
00:09:27.773 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:27.773 "is_configured": false,
00:09:27.773 "data_offset": 0,
00:09:27.773 "data_size": 0
00:09:27.773 }
00:09:27.773 ]
00:09:27.773 }'
00:09:27.773 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.773 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.032 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:28.032 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.032 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.032 [2024-11-26 15:25:26.493156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:28.032 [2024-11-26 15:25:26.493224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:28.032 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.032 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:28.032 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.032 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.032 [2024-11-26 15:25:26.505217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:28.032 [2024-11-26 15:25:26.507139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:28.032 [2024-11-26 15:25:26.507173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:28.032 [2024-11-26 15:25:26.507197] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:28.032 [2024-11-26 15:25:26.507204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:28.032 [2024-11-26 15:25:26.507212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:28.032 [2024-11-26 15:25:26.507218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.294 "name": "Existed_Raid",
00:09:28.294 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.294 "strip_size_kb": 64,
00:09:28.294 "state": "configuring",
00:09:28.294 "raid_level": "raid0",
00:09:28.294 "superblock": false,
00:09:28.294 "num_base_bdevs": 4,
00:09:28.294 "num_base_bdevs_discovered": 1,
00:09:28.294 "num_base_bdevs_operational": 4,
00:09:28.294 "base_bdevs_list": [
00:09:28.294 {
00:09:28.294 "name": "BaseBdev1",
00:09:28.294 "uuid": "ea58fe11-f176-46b6-a3f1-59406f3e5070",
00:09:28.294 "is_configured": true,
00:09:28.294 "data_offset": 0,
00:09:28.294 "data_size": 65536
00:09:28.294 },
00:09:28.294 {
00:09:28.294 "name": "BaseBdev2",
00:09:28.294 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.294 "is_configured": false,
00:09:28.294 "data_offset": 0,
00:09:28.294 "data_size": 0
00:09:28.294 },
00:09:28.294 {
00:09:28.294 "name": "BaseBdev3",
00:09:28.294 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.294 "is_configured": false,
00:09:28.294 "data_offset": 0,
00:09:28.294 "data_size": 0
00:09:28.294 },
00:09:28.294 {
00:09:28.294 "name": "BaseBdev4",
00:09:28.294 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.294 "is_configured": false,
00:09:28.294 "data_offset": 0,
00:09:28.294 "data_size": 0
00:09:28.294 }
00:09:28.294 ]
00:09:28.294 }'
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.294 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.554 [2024-11-26 15:25:26.900338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:28.554 BaseBdev2
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.554 [
00:09:28.554 {
00:09:28.554 "name": "BaseBdev2",
00:09:28.554 "aliases": [
00:09:28.554 "2d235835-2325-4d8c-9e00-ca0ba227cdc1"
00:09:28.554 ],
00:09:28.554 "product_name": "Malloc disk",
00:09:28.554 "block_size": 512,
00:09:28.554 "num_blocks": 65536,
00:09:28.554 "uuid": "2d235835-2325-4d8c-9e00-ca0ba227cdc1",
00:09:28.554 "assigned_rate_limits": {
00:09:28.554 "rw_ios_per_sec": 0,
00:09:28.554 "rw_mbytes_per_sec": 0,
00:09:28.554 "r_mbytes_per_sec": 0,
00:09:28.554 "w_mbytes_per_sec": 0
00:09:28.554 },
00:09:28.554 "claimed": true,
00:09:28.554 "claim_type": "exclusive_write",
00:09:28.554 "zoned": false,
00:09:28.554 "supported_io_types": {
00:09:28.554 "read": true,
00:09:28.554 "write": true,
00:09:28.554 "unmap": true,
00:09:28.554 "flush": true,
00:09:28.554 "reset": true,
00:09:28.554 "nvme_admin": false,
00:09:28.554 "nvme_io": false,
00:09:28.554 "nvme_io_md": false,
00:09:28.554 "write_zeroes": true,
00:09:28.554 "zcopy": true,
00:09:28.554 "get_zone_info": false,
00:09:28.554 "zone_management": false,
00:09:28.554 "zone_append": false,
00:09:28.554 "compare": false,
00:09:28.554 "compare_and_write": false,
00:09:28.554 "abort": true,
00:09:28.554 "seek_hole": false,
00:09:28.554 "seek_data": false,
00:09:28.554 "copy": true,
00:09:28.554 "nvme_iov_md": false
00:09:28.554 },
00:09:28.554 "memory_domains": [
00:09:28.554 {
00:09:28.554 "dma_device_id": "system",
00:09:28.554 "dma_device_type": 1
00:09:28.554 },
00:09:28.554 {
00:09:28.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:28.554 "dma_device_type": 2
00:09:28.554 }
00:09:28.554 ],
00:09:28.554 "driver_specific": {}
00:09:28.554 }
00:09:28.554 ]
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.554 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.555 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.555 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.555 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:28.555 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.555 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.555 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.555 "name": "Existed_Raid",
00:09:28.555 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:28.555 "strip_size_kb": 64,
00:09:28.555 "state": "configuring",
00:09:28.555 "raid_level": "raid0",
00:09:28.555 "superblock": false,
00:09:28.555 "num_base_bdevs": 4,
00:09:28.555 "num_base_bdevs_discovered": 2,
00:09:28.555 "num_base_bdevs_operational": 4,
00:09:28.555 "base_bdevs_list": [
00:09:28.555 {
00:09:28.555 "name": "BaseBdev1",
00:09:28.555 "uuid": "ea58fe11-f176-46b6-a3f1-59406f3e5070",
00:09:28.555 "is_configured": true,
00:09:28.555 "data_offset": 0,
00:09:28.555 "data_size": 65536 00:09:28.555 }, 00:09:28.555 { 00:09:28.555 "name": "BaseBdev2", 00:09:28.555 "uuid": "2d235835-2325-4d8c-9e00-ca0ba227cdc1", 00:09:28.555 "is_configured": true, 00:09:28.555 "data_offset": 0, 00:09:28.555 "data_size": 65536 00:09:28.555 }, 00:09:28.555 { 00:09:28.555 "name": "BaseBdev3", 00:09:28.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.555 "is_configured": false, 00:09:28.555 "data_offset": 0, 00:09:28.555 "data_size": 0 00:09:28.555 }, 00:09:28.555 { 00:09:28.555 "name": "BaseBdev4", 00:09:28.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.555 "is_configured": false, 00:09:28.555 "data_offset": 0, 00:09:28.555 "data_size": 0 00:09:28.555 } 00:09:28.555 ] 00:09:28.555 }' 00:09:28.555 15:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.555 15:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 [2024-11-26 15:25:27.323830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.124 BaseBdev3 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 
00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.124 [ 00:09:29.124 { 00:09:29.124 "name": "BaseBdev3", 00:09:29.124 "aliases": [ 00:09:29.124 "90445ac9-fbdd-4270-a592-90cf3643cae9" 00:09:29.124 ], 00:09:29.124 "product_name": "Malloc disk", 00:09:29.124 "block_size": 512, 00:09:29.124 "num_blocks": 65536, 00:09:29.124 "uuid": "90445ac9-fbdd-4270-a592-90cf3643cae9", 00:09:29.124 "assigned_rate_limits": { 00:09:29.124 "rw_ios_per_sec": 0, 00:09:29.124 "rw_mbytes_per_sec": 0, 00:09:29.124 "r_mbytes_per_sec": 0, 00:09:29.124 "w_mbytes_per_sec": 0 00:09:29.124 }, 00:09:29.124 "claimed": true, 00:09:29.124 "claim_type": "exclusive_write", 00:09:29.124 "zoned": false, 00:09:29.124 "supported_io_types": { 00:09:29.124 "read": true, 00:09:29.124 "write": true, 00:09:29.124 "unmap": true, 00:09:29.124 "flush": true, 00:09:29.124 "reset": true, 00:09:29.124 "nvme_admin": false, 00:09:29.124 "nvme_io": false, 00:09:29.124 "nvme_io_md": false, 00:09:29.124 "write_zeroes": true, 00:09:29.124 "zcopy": true, 00:09:29.124 
"get_zone_info": false, 00:09:29.124 "zone_management": false, 00:09:29.124 "zone_append": false, 00:09:29.124 "compare": false, 00:09:29.124 "compare_and_write": false, 00:09:29.124 "abort": true, 00:09:29.124 "seek_hole": false, 00:09:29.124 "seek_data": false, 00:09:29.124 "copy": true, 00:09:29.124 "nvme_iov_md": false 00:09:29.124 }, 00:09:29.124 "memory_domains": [ 00:09:29.124 { 00:09:29.124 "dma_device_id": "system", 00:09:29.124 "dma_device_type": 1 00:09:29.124 }, 00:09:29.124 { 00:09:29.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.124 "dma_device_type": 2 00:09:29.124 } 00:09:29.124 ], 00:09:29.124 "driver_specific": {} 00:09:29.124 } 00:09:29.124 ] 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.124 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.125 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.125 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.125 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.125 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.125 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.125 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.125 "name": "Existed_Raid", 00:09:29.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.125 "strip_size_kb": 64, 00:09:29.125 "state": "configuring", 00:09:29.125 "raid_level": "raid0", 00:09:29.125 "superblock": false, 00:09:29.125 "num_base_bdevs": 4, 00:09:29.125 "num_base_bdevs_discovered": 3, 00:09:29.125 "num_base_bdevs_operational": 4, 00:09:29.125 "base_bdevs_list": [ 00:09:29.125 { 00:09:29.125 "name": "BaseBdev1", 00:09:29.125 "uuid": "ea58fe11-f176-46b6-a3f1-59406f3e5070", 00:09:29.125 "is_configured": true, 00:09:29.125 "data_offset": 0, 00:09:29.125 "data_size": 65536 00:09:29.125 }, 00:09:29.125 { 00:09:29.125 "name": "BaseBdev2", 00:09:29.125 "uuid": "2d235835-2325-4d8c-9e00-ca0ba227cdc1", 00:09:29.125 "is_configured": true, 00:09:29.125 "data_offset": 0, 00:09:29.125 "data_size": 65536 00:09:29.125 }, 00:09:29.125 { 00:09:29.125 "name": "BaseBdev3", 00:09:29.125 "uuid": "90445ac9-fbdd-4270-a592-90cf3643cae9", 00:09:29.125 "is_configured": true, 00:09:29.125 "data_offset": 0, 00:09:29.125 "data_size": 65536 
00:09:29.125 }, 00:09:29.125 { 00:09:29.125 "name": "BaseBdev4", 00:09:29.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.125 "is_configured": false, 00:09:29.125 "data_offset": 0, 00:09:29.125 "data_size": 0 00:09:29.125 } 00:09:29.125 ] 00:09:29.125 }' 00:09:29.125 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.125 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.383 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:29.383 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.383 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.383 [2024-11-26 15:25:27.850965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:29.383 [2024-11-26 15:25:27.851012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:29.383 [2024-11-26 15:25:27.851033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:29.383 [2024-11-26 15:25:27.851321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:29.383 [2024-11-26 15:25:27.851468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:29.383 [2024-11-26 15:25:27.851490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:29.383 BaseBdev4 00:09:29.383 [2024-11-26 15:25:27.851696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.383 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.383 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:29.384 15:25:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:29.384 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.384 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.384 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.384 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.384 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.384 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.384 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.643 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.644 [ 00:09:29.644 { 00:09:29.644 "name": "BaseBdev4", 00:09:29.644 "aliases": [ 00:09:29.644 "e4066ba2-a5f3-4766-bf6f-528972490082" 00:09:29.644 ], 00:09:29.644 "product_name": "Malloc disk", 00:09:29.644 "block_size": 512, 00:09:29.644 "num_blocks": 65536, 00:09:29.644 "uuid": "e4066ba2-a5f3-4766-bf6f-528972490082", 00:09:29.644 "assigned_rate_limits": { 00:09:29.644 "rw_ios_per_sec": 0, 00:09:29.644 "rw_mbytes_per_sec": 0, 00:09:29.644 "r_mbytes_per_sec": 0, 00:09:29.644 "w_mbytes_per_sec": 0 00:09:29.644 }, 00:09:29.644 "claimed": true, 00:09:29.644 "claim_type": "exclusive_write", 00:09:29.644 "zoned": false, 00:09:29.644 "supported_io_types": { 
00:09:29.644 "read": true, 00:09:29.644 "write": true, 00:09:29.644 "unmap": true, 00:09:29.644 "flush": true, 00:09:29.644 "reset": true, 00:09:29.644 "nvme_admin": false, 00:09:29.644 "nvme_io": false, 00:09:29.644 "nvme_io_md": false, 00:09:29.644 "write_zeroes": true, 00:09:29.644 "zcopy": true, 00:09:29.644 "get_zone_info": false, 00:09:29.644 "zone_management": false, 00:09:29.644 "zone_append": false, 00:09:29.644 "compare": false, 00:09:29.644 "compare_and_write": false, 00:09:29.644 "abort": true, 00:09:29.644 "seek_hole": false, 00:09:29.644 "seek_data": false, 00:09:29.644 "copy": true, 00:09:29.644 "nvme_iov_md": false 00:09:29.644 }, 00:09:29.644 "memory_domains": [ 00:09:29.644 { 00:09:29.644 "dma_device_id": "system", 00:09:29.644 "dma_device_type": 1 00:09:29.644 }, 00:09:29.644 { 00:09:29.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.644 "dma_device_type": 2 00:09:29.644 } 00:09:29.644 ], 00:09:29.644 "driver_specific": {} 00:09:29.644 } 00:09:29.644 ] 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.644 "name": "Existed_Raid", 00:09:29.644 "uuid": "98f1609b-ac00-471f-8656-896c50257973", 00:09:29.644 "strip_size_kb": 64, 00:09:29.644 "state": "online", 00:09:29.644 "raid_level": "raid0", 00:09:29.644 "superblock": false, 00:09:29.644 "num_base_bdevs": 4, 00:09:29.644 "num_base_bdevs_discovered": 4, 00:09:29.644 "num_base_bdevs_operational": 4, 00:09:29.644 "base_bdevs_list": [ 00:09:29.644 { 00:09:29.644 "name": "BaseBdev1", 00:09:29.644 "uuid": "ea58fe11-f176-46b6-a3f1-59406f3e5070", 00:09:29.644 "is_configured": true, 00:09:29.644 "data_offset": 0, 00:09:29.644 "data_size": 65536 00:09:29.644 }, 00:09:29.644 { 00:09:29.644 "name": "BaseBdev2", 00:09:29.644 "uuid": "2d235835-2325-4d8c-9e00-ca0ba227cdc1", 00:09:29.644 
"is_configured": true, 00:09:29.644 "data_offset": 0, 00:09:29.644 "data_size": 65536 00:09:29.644 }, 00:09:29.644 { 00:09:29.644 "name": "BaseBdev3", 00:09:29.644 "uuid": "90445ac9-fbdd-4270-a592-90cf3643cae9", 00:09:29.644 "is_configured": true, 00:09:29.644 "data_offset": 0, 00:09:29.644 "data_size": 65536 00:09:29.644 }, 00:09:29.644 { 00:09:29.644 "name": "BaseBdev4", 00:09:29.644 "uuid": "e4066ba2-a5f3-4766-bf6f-528972490082", 00:09:29.644 "is_configured": true, 00:09:29.644 "data_offset": 0, 00:09:29.644 "data_size": 65536 00:09:29.644 } 00:09:29.644 ] 00:09:29.644 }' 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.644 15:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.904 [2024-11-26 15:25:28.343462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.904 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.904 "name": "Existed_Raid", 00:09:29.904 "aliases": [ 00:09:29.904 "98f1609b-ac00-471f-8656-896c50257973" 00:09:29.904 ], 00:09:29.904 "product_name": "Raid Volume", 00:09:29.904 "block_size": 512, 00:09:29.904 "num_blocks": 262144, 00:09:29.904 "uuid": "98f1609b-ac00-471f-8656-896c50257973", 00:09:29.904 "assigned_rate_limits": { 00:09:29.904 "rw_ios_per_sec": 0, 00:09:29.904 "rw_mbytes_per_sec": 0, 00:09:29.904 "r_mbytes_per_sec": 0, 00:09:29.904 "w_mbytes_per_sec": 0 00:09:29.904 }, 00:09:29.904 "claimed": false, 00:09:29.904 "zoned": false, 00:09:29.904 "supported_io_types": { 00:09:29.904 "read": true, 00:09:29.904 "write": true, 00:09:29.904 "unmap": true, 00:09:29.904 "flush": true, 00:09:29.904 "reset": true, 00:09:29.904 "nvme_admin": false, 00:09:29.904 "nvme_io": false, 00:09:29.904 "nvme_io_md": false, 00:09:29.905 "write_zeroes": true, 00:09:29.905 "zcopy": false, 00:09:29.905 "get_zone_info": false, 00:09:29.905 "zone_management": false, 00:09:29.905 "zone_append": false, 00:09:29.905 "compare": false, 00:09:29.905 "compare_and_write": false, 00:09:29.905 "abort": false, 00:09:29.905 "seek_hole": false, 00:09:29.905 "seek_data": false, 00:09:29.905 "copy": false, 00:09:29.905 "nvme_iov_md": false 00:09:29.905 }, 00:09:29.905 "memory_domains": [ 00:09:29.905 { 00:09:29.905 "dma_device_id": "system", 00:09:29.905 "dma_device_type": 1 00:09:29.905 }, 00:09:29.905 { 00:09:29.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.905 "dma_device_type": 2 00:09:29.905 }, 00:09:29.905 { 00:09:29.905 "dma_device_id": "system", 00:09:29.905 "dma_device_type": 1 00:09:29.905 }, 00:09:29.905 { 00:09:29.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.905 "dma_device_type": 2 00:09:29.905 }, 00:09:29.905 { 
00:09:29.905 "dma_device_id": "system", 00:09:29.905 "dma_device_type": 1 00:09:29.905 }, 00:09:29.905 { 00:09:29.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.905 "dma_device_type": 2 00:09:29.905 }, 00:09:29.905 { 00:09:29.905 "dma_device_id": "system", 00:09:29.905 "dma_device_type": 1 00:09:29.905 }, 00:09:29.905 { 00:09:29.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.905 "dma_device_type": 2 00:09:29.905 } 00:09:29.905 ], 00:09:29.905 "driver_specific": { 00:09:29.905 "raid": { 00:09:29.905 "uuid": "98f1609b-ac00-471f-8656-896c50257973", 00:09:29.905 "strip_size_kb": 64, 00:09:29.905 "state": "online", 00:09:29.905 "raid_level": "raid0", 00:09:29.905 "superblock": false, 00:09:29.905 "num_base_bdevs": 4, 00:09:29.905 "num_base_bdevs_discovered": 4, 00:09:29.905 "num_base_bdevs_operational": 4, 00:09:29.905 "base_bdevs_list": [ 00:09:29.905 { 00:09:29.905 "name": "BaseBdev1", 00:09:29.905 "uuid": "ea58fe11-f176-46b6-a3f1-59406f3e5070", 00:09:29.905 "is_configured": true, 00:09:29.905 "data_offset": 0, 00:09:29.905 "data_size": 65536 00:09:29.905 }, 00:09:29.905 { 00:09:29.905 "name": "BaseBdev2", 00:09:29.905 "uuid": "2d235835-2325-4d8c-9e00-ca0ba227cdc1", 00:09:29.905 "is_configured": true, 00:09:29.905 "data_offset": 0, 00:09:29.905 "data_size": 65536 00:09:29.905 }, 00:09:29.905 { 00:09:29.905 "name": "BaseBdev3", 00:09:29.905 "uuid": "90445ac9-fbdd-4270-a592-90cf3643cae9", 00:09:29.905 "is_configured": true, 00:09:29.905 "data_offset": 0, 00:09:29.905 "data_size": 65536 00:09:29.905 }, 00:09:29.905 { 00:09:29.905 "name": "BaseBdev4", 00:09:29.905 "uuid": "e4066ba2-a5f3-4766-bf6f-528972490082", 00:09:29.905 "is_configured": true, 00:09:29.905 "data_offset": 0, 00:09:29.905 "data_size": 65536 00:09:29.905 } 00:09:29.905 ] 00:09:29.905 } 00:09:29.905 } 00:09:29.905 }' 00:09:29.905 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:30.165 BaseBdev2 00:09:30.165 BaseBdev3 00:09:30.165 BaseBdev4' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.165 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.426 [2024-11-26 15:25:28.647284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:30.426 [2024-11-26 15:25:28.647319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.426 [2024-11-26 15:25:28.647369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid0 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.426 "name": "Existed_Raid", 00:09:30.426 "uuid": "98f1609b-ac00-471f-8656-896c50257973", 00:09:30.426 "strip_size_kb": 64, 00:09:30.426 "state": "offline", 00:09:30.426 "raid_level": "raid0", 00:09:30.426 "superblock": false, 00:09:30.426 "num_base_bdevs": 4, 00:09:30.426 "num_base_bdevs_discovered": 3, 00:09:30.426 "num_base_bdevs_operational": 3, 00:09:30.426 "base_bdevs_list": [ 00:09:30.426 { 00:09:30.426 "name": null, 00:09:30.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.426 "is_configured": false, 00:09:30.426 "data_offset": 0, 00:09:30.426 "data_size": 65536 00:09:30.426 }, 00:09:30.426 { 
00:09:30.426 "name": "BaseBdev2", 00:09:30.426 "uuid": "2d235835-2325-4d8c-9e00-ca0ba227cdc1", 00:09:30.426 "is_configured": true, 00:09:30.426 "data_offset": 0, 00:09:30.426 "data_size": 65536 00:09:30.426 }, 00:09:30.426 { 00:09:30.426 "name": "BaseBdev3", 00:09:30.426 "uuid": "90445ac9-fbdd-4270-a592-90cf3643cae9", 00:09:30.426 "is_configured": true, 00:09:30.426 "data_offset": 0, 00:09:30.426 "data_size": 65536 00:09:30.426 }, 00:09:30.426 { 00:09:30.426 "name": "BaseBdev4", 00:09:30.426 "uuid": "e4066ba2-a5f3-4766-bf6f-528972490082", 00:09:30.426 "is_configured": true, 00:09:30.426 "data_offset": 0, 00:09:30.426 "data_size": 65536 00:09:30.426 } 00:09:30.426 ] 00:09:30.426 }' 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.426 15:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete 
BaseBdev2 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.686 [2024-11-26 15:25:29.130746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.686 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.948 [2024-11-26 15:25:29.177951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.948 15:25:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.948 [2024-11-26 15:25:29.245120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:30.948 [2024-11-26 15:25:29.245189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.948 BaseBdev2 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.948 [ 00:09:30.948 { 00:09:30.948 "name": "BaseBdev2", 00:09:30.948 "aliases": [ 00:09:30.948 "ce7e65c2-180d-409c-8af1-b232018c1348" 00:09:30.948 ], 00:09:30.948 "product_name": "Malloc disk", 00:09:30.948 "block_size": 512, 00:09:30.948 "num_blocks": 65536, 00:09:30.948 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:30.948 "assigned_rate_limits": { 00:09:30.948 "rw_ios_per_sec": 0, 00:09:30.948 "rw_mbytes_per_sec": 0, 00:09:30.948 "r_mbytes_per_sec": 0, 00:09:30.948 "w_mbytes_per_sec": 0 00:09:30.948 }, 00:09:30.948 "claimed": false, 00:09:30.948 "zoned": false, 00:09:30.948 "supported_io_types": { 00:09:30.948 "read": true, 00:09:30.948 "write": true, 00:09:30.948 "unmap": true, 00:09:30.948 "flush": true, 00:09:30.948 "reset": true, 00:09:30.948 "nvme_admin": false, 00:09:30.948 "nvme_io": false, 00:09:30.948 "nvme_io_md": false, 00:09:30.948 "write_zeroes": true, 00:09:30.948 "zcopy": true, 00:09:30.948 "get_zone_info": false, 00:09:30.948 "zone_management": false, 00:09:30.948 "zone_append": false, 00:09:30.948 "compare": false, 00:09:30.948 
"compare_and_write": false, 00:09:30.948 "abort": true, 00:09:30.948 "seek_hole": false, 00:09:30.948 "seek_data": false, 00:09:30.948 "copy": true, 00:09:30.948 "nvme_iov_md": false 00:09:30.948 }, 00:09:30.948 "memory_domains": [ 00:09:30.948 { 00:09:30.948 "dma_device_id": "system", 00:09:30.948 "dma_device_type": 1 00:09:30.948 }, 00:09:30.948 { 00:09:30.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.948 "dma_device_type": 2 00:09:30.948 } 00:09:30.948 ], 00:09:30.948 "driver_specific": {} 00:09:30.948 } 00:09:30.948 ] 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.948 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.949 BaseBdev3 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.949 [ 00:09:30.949 { 00:09:30.949 "name": "BaseBdev3", 00:09:30.949 "aliases": [ 00:09:30.949 "eea8f8ae-61b1-4553-9d95-8eadb3199d15" 00:09:30.949 ], 00:09:30.949 "product_name": "Malloc disk", 00:09:30.949 "block_size": 512, 00:09:30.949 "num_blocks": 65536, 00:09:30.949 "uuid": "eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:30.949 "assigned_rate_limits": { 00:09:30.949 "rw_ios_per_sec": 0, 00:09:30.949 "rw_mbytes_per_sec": 0, 00:09:30.949 "r_mbytes_per_sec": 0, 00:09:30.949 "w_mbytes_per_sec": 0 00:09:30.949 }, 00:09:30.949 "claimed": false, 00:09:30.949 "zoned": false, 00:09:30.949 "supported_io_types": { 00:09:30.949 "read": true, 00:09:30.949 "write": true, 00:09:30.949 "unmap": true, 00:09:30.949 "flush": true, 00:09:30.949 "reset": true, 00:09:30.949 "nvme_admin": false, 00:09:30.949 "nvme_io": false, 00:09:30.949 "nvme_io_md": false, 00:09:30.949 "write_zeroes": true, 00:09:30.949 "zcopy": true, 00:09:30.949 "get_zone_info": false, 00:09:30.949 "zone_management": false, 00:09:30.949 "zone_append": false, 00:09:30.949 "compare": false, 00:09:30.949 
"compare_and_write": false, 00:09:30.949 "abort": true, 00:09:30.949 "seek_hole": false, 00:09:30.949 "seek_data": false, 00:09:30.949 "copy": true, 00:09:30.949 "nvme_iov_md": false 00:09:30.949 }, 00:09:30.949 "memory_domains": [ 00:09:30.949 { 00:09:30.949 "dma_device_id": "system", 00:09:30.949 "dma_device_type": 1 00:09:30.949 }, 00:09:30.949 { 00:09:30.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.949 "dma_device_type": 2 00:09:30.949 } 00:09:30.949 ], 00:09:30.949 "driver_specific": {} 00:09:30.949 } 00:09:30.949 ] 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.949 BaseBdev4 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.949 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.210 [ 00:09:31.210 { 00:09:31.210 "name": "BaseBdev4", 00:09:31.210 "aliases": [ 00:09:31.210 "83397f0b-f159-4961-9e4a-47ae39d1fc4b" 00:09:31.210 ], 00:09:31.210 "product_name": "Malloc disk", 00:09:31.210 "block_size": 512, 00:09:31.210 "num_blocks": 65536, 00:09:31.210 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:31.210 "assigned_rate_limits": { 00:09:31.210 "rw_ios_per_sec": 0, 00:09:31.210 "rw_mbytes_per_sec": 0, 00:09:31.210 "r_mbytes_per_sec": 0, 00:09:31.210 "w_mbytes_per_sec": 0 00:09:31.210 }, 00:09:31.210 "claimed": false, 00:09:31.210 "zoned": false, 00:09:31.210 "supported_io_types": { 00:09:31.210 "read": true, 00:09:31.210 "write": true, 00:09:31.210 "unmap": true, 00:09:31.210 "flush": true, 00:09:31.210 "reset": true, 00:09:31.210 "nvme_admin": false, 00:09:31.210 "nvme_io": false, 00:09:31.210 "nvme_io_md": false, 00:09:31.210 "write_zeroes": true, 00:09:31.210 "zcopy": true, 00:09:31.210 "get_zone_info": false, 00:09:31.210 "zone_management": false, 00:09:31.210 "zone_append": false, 00:09:31.210 "compare": false, 00:09:31.210 
"compare_and_write": false, 00:09:31.210 "abort": true, 00:09:31.210 "seek_hole": false, 00:09:31.210 "seek_data": false, 00:09:31.210 "copy": true, 00:09:31.210 "nvme_iov_md": false 00:09:31.210 }, 00:09:31.210 "memory_domains": [ 00:09:31.210 { 00:09:31.210 "dma_device_id": "system", 00:09:31.210 "dma_device_type": 1 00:09:31.210 }, 00:09:31.210 { 00:09:31.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.210 "dma_device_type": 2 00:09:31.210 } 00:09:31.210 ], 00:09:31.210 "driver_specific": {} 00:09:31.210 } 00:09:31.210 ] 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.210 [2024-11-26 15:25:29.445356] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.210 [2024-11-26 15:25:29.445401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.210 [2024-11-26 15:25:29.445419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.210 [2024-11-26 15:25:29.447191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.210 [2024-11-26 15:25:29.447238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is 
claimed 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.210 "name": "Existed_Raid", 00:09:31.210 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.210 "strip_size_kb": 64, 00:09:31.210 "state": "configuring", 00:09:31.210 "raid_level": "raid0", 00:09:31.210 "superblock": false, 00:09:31.210 "num_base_bdevs": 4, 00:09:31.210 "num_base_bdevs_discovered": 3, 00:09:31.210 "num_base_bdevs_operational": 4, 00:09:31.210 "base_bdevs_list": [ 00:09:31.210 { 00:09:31.210 "name": "BaseBdev1", 00:09:31.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.210 "is_configured": false, 00:09:31.210 "data_offset": 0, 00:09:31.210 "data_size": 0 00:09:31.210 }, 00:09:31.210 { 00:09:31.210 "name": "BaseBdev2", 00:09:31.210 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:31.210 "is_configured": true, 00:09:31.210 "data_offset": 0, 00:09:31.210 "data_size": 65536 00:09:31.210 }, 00:09:31.210 { 00:09:31.210 "name": "BaseBdev3", 00:09:31.210 "uuid": "eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:31.210 "is_configured": true, 00:09:31.210 "data_offset": 0, 00:09:31.210 "data_size": 65536 00:09:31.210 }, 00:09:31.210 { 00:09:31.210 "name": "BaseBdev4", 00:09:31.210 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:31.210 "is_configured": true, 00:09:31.210 "data_offset": 0, 00:09:31.210 "data_size": 65536 00:09:31.210 } 00:09:31.210 ] 00:09:31.210 }' 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.210 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.470 [2024-11-26 15:25:29.929466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.470 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.730 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.730 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.730 "name": "Existed_Raid", 00:09:31.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.730 
"strip_size_kb": 64, 00:09:31.730 "state": "configuring", 00:09:31.730 "raid_level": "raid0", 00:09:31.730 "superblock": false, 00:09:31.730 "num_base_bdevs": 4, 00:09:31.730 "num_base_bdevs_discovered": 2, 00:09:31.730 "num_base_bdevs_operational": 4, 00:09:31.730 "base_bdevs_list": [ 00:09:31.730 { 00:09:31.730 "name": "BaseBdev1", 00:09:31.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.730 "is_configured": false, 00:09:31.730 "data_offset": 0, 00:09:31.730 "data_size": 0 00:09:31.730 }, 00:09:31.730 { 00:09:31.730 "name": null, 00:09:31.730 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:31.730 "is_configured": false, 00:09:31.730 "data_offset": 0, 00:09:31.730 "data_size": 65536 00:09:31.730 }, 00:09:31.730 { 00:09:31.730 "name": "BaseBdev3", 00:09:31.730 "uuid": "eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:31.730 "is_configured": true, 00:09:31.730 "data_offset": 0, 00:09:31.730 "data_size": 65536 00:09:31.730 }, 00:09:31.730 { 00:09:31.730 "name": "BaseBdev4", 00:09:31.730 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:31.730 "is_configured": true, 00:09:31.730 "data_offset": 0, 00:09:31.730 "data_size": 65536 00:09:31.730 } 00:09:31.730 ] 00:09:31.730 }' 00:09:31.730 15:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.730 15:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.991 
15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.991 [2024-11-26 15:25:30.408661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.991 BaseBdev1 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.991 15:25:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.991 [ 00:09:31.991 { 00:09:31.991 "name": "BaseBdev1", 00:09:31.991 "aliases": [ 00:09:31.991 "16429750-8543-4606-9553-b548c15d47ad" 00:09:31.991 ], 00:09:31.991 "product_name": "Malloc disk", 00:09:31.991 "block_size": 512, 00:09:31.991 "num_blocks": 65536, 00:09:31.991 "uuid": "16429750-8543-4606-9553-b548c15d47ad", 00:09:31.991 "assigned_rate_limits": { 00:09:31.991 "rw_ios_per_sec": 0, 00:09:31.991 "rw_mbytes_per_sec": 0, 00:09:31.991 "r_mbytes_per_sec": 0, 00:09:31.991 "w_mbytes_per_sec": 0 00:09:31.991 }, 00:09:31.991 "claimed": true, 00:09:31.991 "claim_type": "exclusive_write", 00:09:31.991 "zoned": false, 00:09:31.991 "supported_io_types": { 00:09:31.991 "read": true, 00:09:31.991 "write": true, 00:09:31.991 "unmap": true, 00:09:31.991 "flush": true, 00:09:31.991 "reset": true, 00:09:31.991 "nvme_admin": false, 00:09:31.991 "nvme_io": false, 00:09:31.991 "nvme_io_md": false, 00:09:31.991 "write_zeroes": true, 00:09:31.991 "zcopy": true, 00:09:31.991 "get_zone_info": false, 00:09:31.991 "zone_management": false, 00:09:31.991 "zone_append": false, 00:09:31.991 "compare": false, 00:09:31.991 "compare_and_write": false, 00:09:31.991 "abort": true, 00:09:31.991 "seek_hole": false, 00:09:31.991 "seek_data": false, 00:09:31.991 "copy": true, 00:09:31.991 "nvme_iov_md": false 00:09:31.991 }, 00:09:31.991 "memory_domains": [ 00:09:31.991 { 00:09:31.991 "dma_device_id": "system", 00:09:31.991 "dma_device_type": 1 00:09:31.991 }, 00:09:31.991 { 00:09:31.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.991 "dma_device_type": 2 00:09:31.991 } 00:09:31.991 ], 00:09:31.991 "driver_specific": {} 00:09:31.991 } 00:09:31.991 ] 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.991 15:25:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.991 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.252 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.252 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.252 "name": "Existed_Raid", 00:09:32.252 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:32.252 "strip_size_kb": 64, 00:09:32.252 "state": "configuring", 00:09:32.252 "raid_level": "raid0", 00:09:32.252 "superblock": false, 00:09:32.252 "num_base_bdevs": 4, 00:09:32.252 "num_base_bdevs_discovered": 3, 00:09:32.252 "num_base_bdevs_operational": 4, 00:09:32.252 "base_bdevs_list": [ 00:09:32.252 { 00:09:32.252 "name": "BaseBdev1", 00:09:32.252 "uuid": "16429750-8543-4606-9553-b548c15d47ad", 00:09:32.252 "is_configured": true, 00:09:32.252 "data_offset": 0, 00:09:32.252 "data_size": 65536 00:09:32.252 }, 00:09:32.252 { 00:09:32.252 "name": null, 00:09:32.252 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:32.252 "is_configured": false, 00:09:32.252 "data_offset": 0, 00:09:32.252 "data_size": 65536 00:09:32.252 }, 00:09:32.252 { 00:09:32.252 "name": "BaseBdev3", 00:09:32.252 "uuid": "eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:32.252 "is_configured": true, 00:09:32.252 "data_offset": 0, 00:09:32.252 "data_size": 65536 00:09:32.252 }, 00:09:32.252 { 00:09:32.252 "name": "BaseBdev4", 00:09:32.252 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:32.252 "is_configured": true, 00:09:32.252 "data_offset": 0, 00:09:32.252 "data_size": 65536 00:09:32.252 } 00:09:32.252 ] 00:09:32.252 }' 00:09:32.252 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.252 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.513 [2024-11-26 15:25:30.908885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.513 "name": "Existed_Raid", 00:09:32.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.513 "strip_size_kb": 64, 00:09:32.513 "state": "configuring", 00:09:32.513 "raid_level": "raid0", 00:09:32.513 "superblock": false, 00:09:32.513 "num_base_bdevs": 4, 00:09:32.513 "num_base_bdevs_discovered": 2, 00:09:32.513 "num_base_bdevs_operational": 4, 00:09:32.513 "base_bdevs_list": [ 00:09:32.513 { 00:09:32.513 "name": "BaseBdev1", 00:09:32.513 "uuid": "16429750-8543-4606-9553-b548c15d47ad", 00:09:32.513 "is_configured": true, 00:09:32.513 "data_offset": 0, 00:09:32.513 "data_size": 65536 00:09:32.513 }, 00:09:32.513 { 00:09:32.513 "name": null, 00:09:32.513 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:32.513 "is_configured": false, 00:09:32.513 "data_offset": 0, 00:09:32.513 "data_size": 65536 00:09:32.513 }, 00:09:32.513 { 00:09:32.513 "name": null, 00:09:32.513 "uuid": "eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:32.513 "is_configured": false, 00:09:32.513 "data_offset": 0, 00:09:32.513 "data_size": 65536 00:09:32.513 }, 00:09:32.513 { 00:09:32.513 "name": "BaseBdev4", 00:09:32.513 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:32.513 "is_configured": true, 00:09:32.513 "data_offset": 0, 00:09:32.513 "data_size": 65536 00:09:32.513 } 00:09:32.513 ] 00:09:32.513 }' 00:09:32.513 15:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.513 15:25:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.084 [2024-11-26 15:25:31.381048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.084 15:25:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.084 "name": "Existed_Raid", 00:09:33.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.084 "strip_size_kb": 64, 00:09:33.084 "state": "configuring", 00:09:33.084 "raid_level": "raid0", 00:09:33.084 "superblock": false, 00:09:33.084 "num_base_bdevs": 4, 00:09:33.084 "num_base_bdevs_discovered": 3, 00:09:33.084 "num_base_bdevs_operational": 4, 00:09:33.084 "base_bdevs_list": [ 00:09:33.084 { 00:09:33.084 "name": "BaseBdev1", 00:09:33.084 "uuid": "16429750-8543-4606-9553-b548c15d47ad", 00:09:33.084 "is_configured": true, 00:09:33.084 "data_offset": 0, 00:09:33.084 "data_size": 65536 00:09:33.084 }, 00:09:33.084 { 00:09:33.084 "name": null, 00:09:33.084 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:33.084 "is_configured": false, 00:09:33.084 "data_offset": 
0, 00:09:33.084 "data_size": 65536 00:09:33.084 }, 00:09:33.084 { 00:09:33.084 "name": "BaseBdev3", 00:09:33.084 "uuid": "eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:33.084 "is_configured": true, 00:09:33.084 "data_offset": 0, 00:09:33.084 "data_size": 65536 00:09:33.084 }, 00:09:33.084 { 00:09:33.084 "name": "BaseBdev4", 00:09:33.084 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:33.084 "is_configured": true, 00:09:33.084 "data_offset": 0, 00:09:33.084 "data_size": 65536 00:09:33.084 } 00:09:33.084 ] 00:09:33.084 }' 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.084 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.345 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.345 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.345 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.345 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:33.345 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.604 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:33.604 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.604 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.604 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.604 [2024-11-26 15:25:31.833214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.604 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.605 15:25:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.605 "name": "Existed_Raid", 00:09:33.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.605 "strip_size_kb": 64, 00:09:33.605 "state": "configuring", 00:09:33.605 
"raid_level": "raid0", 00:09:33.605 "superblock": false, 00:09:33.605 "num_base_bdevs": 4, 00:09:33.605 "num_base_bdevs_discovered": 2, 00:09:33.605 "num_base_bdevs_operational": 4, 00:09:33.605 "base_bdevs_list": [ 00:09:33.605 { 00:09:33.605 "name": null, 00:09:33.605 "uuid": "16429750-8543-4606-9553-b548c15d47ad", 00:09:33.605 "is_configured": false, 00:09:33.605 "data_offset": 0, 00:09:33.605 "data_size": 65536 00:09:33.605 }, 00:09:33.605 { 00:09:33.605 "name": null, 00:09:33.605 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:33.605 "is_configured": false, 00:09:33.605 "data_offset": 0, 00:09:33.605 "data_size": 65536 00:09:33.605 }, 00:09:33.605 { 00:09:33.605 "name": "BaseBdev3", 00:09:33.605 "uuid": "eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:33.605 "is_configured": true, 00:09:33.605 "data_offset": 0, 00:09:33.605 "data_size": 65536 00:09:33.605 }, 00:09:33.605 { 00:09:33.605 "name": "BaseBdev4", 00:09:33.605 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:33.605 "is_configured": true, 00:09:33.605 "data_offset": 0, 00:09:33.605 "data_size": 65536 00:09:33.605 } 00:09:33.605 ] 00:09:33.605 }' 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.605 15:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.865 [2024-11-26 15:25:32.327927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.865 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.125 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.125 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:34.125 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.125 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.125 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.125 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.125 "name": "Existed_Raid", 00:09:34.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.125 "strip_size_kb": 64, 00:09:34.125 "state": "configuring", 00:09:34.125 "raid_level": "raid0", 00:09:34.125 "superblock": false, 00:09:34.125 "num_base_bdevs": 4, 00:09:34.125 "num_base_bdevs_discovered": 3, 00:09:34.125 "num_base_bdevs_operational": 4, 00:09:34.125 "base_bdevs_list": [ 00:09:34.125 { 00:09:34.125 "name": null, 00:09:34.125 "uuid": "16429750-8543-4606-9553-b548c15d47ad", 00:09:34.125 "is_configured": false, 00:09:34.125 "data_offset": 0, 00:09:34.125 "data_size": 65536 00:09:34.125 }, 00:09:34.125 { 00:09:34.125 "name": "BaseBdev2", 00:09:34.125 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:34.125 "is_configured": true, 00:09:34.125 "data_offset": 0, 00:09:34.125 "data_size": 65536 00:09:34.125 }, 00:09:34.125 { 00:09:34.125 "name": "BaseBdev3", 00:09:34.125 "uuid": "eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:34.125 "is_configured": true, 00:09:34.125 "data_offset": 0, 00:09:34.125 "data_size": 65536 00:09:34.125 }, 00:09:34.125 { 00:09:34.125 "name": "BaseBdev4", 00:09:34.125 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:34.125 "is_configured": true, 00:09:34.125 "data_offset": 0, 00:09:34.125 "data_size": 65536 00:09:34.125 } 00:09:34.125 ] 00:09:34.125 }' 00:09:34.125 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.125 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.386 15:25:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 16429750-8543-4606-9553-b548c15d47ad 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.386 [2024-11-26 15:25:32.819247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:34.386 [2024-11-26 15:25:32.819289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:34.386 [2024-11-26 15:25:32.819300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:34.386 
[2024-11-26 15:25:32.819549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:09:34.386 [2024-11-26 15:25:32.819671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:34.386 [2024-11-26 15:25:32.819686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:34.386 [2024-11-26 15:25:32.819858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.386 NewBaseBdev 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.386 [ 00:09:34.386 { 00:09:34.386 "name": "NewBaseBdev", 00:09:34.386 "aliases": [ 00:09:34.386 "16429750-8543-4606-9553-b548c15d47ad" 00:09:34.386 ], 00:09:34.386 "product_name": "Malloc disk", 00:09:34.386 "block_size": 512, 00:09:34.386 "num_blocks": 65536, 00:09:34.386 "uuid": "16429750-8543-4606-9553-b548c15d47ad", 00:09:34.386 "assigned_rate_limits": { 00:09:34.386 "rw_ios_per_sec": 0, 00:09:34.386 "rw_mbytes_per_sec": 0, 00:09:34.386 "r_mbytes_per_sec": 0, 00:09:34.386 "w_mbytes_per_sec": 0 00:09:34.386 }, 00:09:34.386 "claimed": true, 00:09:34.386 "claim_type": "exclusive_write", 00:09:34.386 "zoned": false, 00:09:34.386 "supported_io_types": { 00:09:34.386 "read": true, 00:09:34.386 "write": true, 00:09:34.386 "unmap": true, 00:09:34.386 "flush": true, 00:09:34.386 "reset": true, 00:09:34.386 "nvme_admin": false, 00:09:34.386 "nvme_io": false, 00:09:34.386 "nvme_io_md": false, 00:09:34.386 "write_zeroes": true, 00:09:34.386 "zcopy": true, 00:09:34.386 "get_zone_info": false, 00:09:34.386 "zone_management": false, 00:09:34.386 "zone_append": false, 00:09:34.386 "compare": false, 00:09:34.386 "compare_and_write": false, 00:09:34.386 "abort": true, 00:09:34.386 "seek_hole": false, 00:09:34.386 "seek_data": false, 00:09:34.386 "copy": true, 00:09:34.386 "nvme_iov_md": false 00:09:34.386 }, 00:09:34.386 "memory_domains": [ 00:09:34.386 { 00:09:34.386 "dma_device_id": "system", 00:09:34.386 "dma_device_type": 1 00:09:34.386 }, 00:09:34.386 { 00:09:34.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.386 "dma_device_type": 2 00:09:34.386 } 00:09:34.386 ], 00:09:34.386 "driver_specific": {} 00:09:34.386 } 00:09:34.386 ] 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.386 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.646 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.646 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.646 "name": "Existed_Raid", 00:09:34.646 "uuid": "243ba162-fbec-4998-9633-f6c6dcb7dcf5", 00:09:34.647 "strip_size_kb": 64, 00:09:34.647 "state": "online", 
00:09:34.647 "raid_level": "raid0", 00:09:34.647 "superblock": false, 00:09:34.647 "num_base_bdevs": 4, 00:09:34.647 "num_base_bdevs_discovered": 4, 00:09:34.647 "num_base_bdevs_operational": 4, 00:09:34.647 "base_bdevs_list": [ 00:09:34.647 { 00:09:34.647 "name": "NewBaseBdev", 00:09:34.647 "uuid": "16429750-8543-4606-9553-b548c15d47ad", 00:09:34.647 "is_configured": true, 00:09:34.647 "data_offset": 0, 00:09:34.647 "data_size": 65536 00:09:34.647 }, 00:09:34.647 { 00:09:34.647 "name": "BaseBdev2", 00:09:34.647 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:34.647 "is_configured": true, 00:09:34.647 "data_offset": 0, 00:09:34.647 "data_size": 65536 00:09:34.647 }, 00:09:34.647 { 00:09:34.647 "name": "BaseBdev3", 00:09:34.647 "uuid": "eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:34.647 "is_configured": true, 00:09:34.647 "data_offset": 0, 00:09:34.647 "data_size": 65536 00:09:34.647 }, 00:09:34.647 { 00:09:34.647 "name": "BaseBdev4", 00:09:34.647 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:34.647 "is_configured": true, 00:09:34.647 "data_offset": 0, 00:09:34.647 "data_size": 65536 00:09:34.647 } 00:09:34.647 ] 00:09:34.647 }' 00:09:34.647 15:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.647 15:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.907 [2024-11-26 15:25:33.319744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.907 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.907 "name": "Existed_Raid", 00:09:34.907 "aliases": [ 00:09:34.907 "243ba162-fbec-4998-9633-f6c6dcb7dcf5" 00:09:34.907 ], 00:09:34.907 "product_name": "Raid Volume", 00:09:34.907 "block_size": 512, 00:09:34.907 "num_blocks": 262144, 00:09:34.907 "uuid": "243ba162-fbec-4998-9633-f6c6dcb7dcf5", 00:09:34.907 "assigned_rate_limits": { 00:09:34.907 "rw_ios_per_sec": 0, 00:09:34.907 "rw_mbytes_per_sec": 0, 00:09:34.907 "r_mbytes_per_sec": 0, 00:09:34.907 "w_mbytes_per_sec": 0 00:09:34.907 }, 00:09:34.907 "claimed": false, 00:09:34.907 "zoned": false, 00:09:34.907 "supported_io_types": { 00:09:34.907 "read": true, 00:09:34.907 "write": true, 00:09:34.907 "unmap": true, 00:09:34.907 "flush": true, 00:09:34.907 "reset": true, 00:09:34.907 "nvme_admin": false, 00:09:34.907 "nvme_io": false, 00:09:34.907 "nvme_io_md": false, 00:09:34.907 "write_zeroes": true, 00:09:34.907 "zcopy": false, 00:09:34.907 "get_zone_info": false, 00:09:34.907 "zone_management": false, 00:09:34.907 "zone_append": false, 00:09:34.907 "compare": false, 00:09:34.907 "compare_and_write": false, 00:09:34.907 "abort": false, 00:09:34.907 "seek_hole": false, 00:09:34.907 "seek_data": 
false, 00:09:34.907 "copy": false, 00:09:34.907 "nvme_iov_md": false 00:09:34.907 }, 00:09:34.907 "memory_domains": [ 00:09:34.907 { 00:09:34.907 "dma_device_id": "system", 00:09:34.907 "dma_device_type": 1 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.907 "dma_device_type": 2 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "dma_device_id": "system", 00:09:34.907 "dma_device_type": 1 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.907 "dma_device_type": 2 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "dma_device_id": "system", 00:09:34.907 "dma_device_type": 1 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.907 "dma_device_type": 2 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "dma_device_id": "system", 00:09:34.907 "dma_device_type": 1 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.907 "dma_device_type": 2 00:09:34.907 } 00:09:34.907 ], 00:09:34.907 "driver_specific": { 00:09:34.907 "raid": { 00:09:34.907 "uuid": "243ba162-fbec-4998-9633-f6c6dcb7dcf5", 00:09:34.907 "strip_size_kb": 64, 00:09:34.907 "state": "online", 00:09:34.907 "raid_level": "raid0", 00:09:34.907 "superblock": false, 00:09:34.907 "num_base_bdevs": 4, 00:09:34.907 "num_base_bdevs_discovered": 4, 00:09:34.907 "num_base_bdevs_operational": 4, 00:09:34.907 "base_bdevs_list": [ 00:09:34.907 { 00:09:34.907 "name": "NewBaseBdev", 00:09:34.907 "uuid": "16429750-8543-4606-9553-b548c15d47ad", 00:09:34.907 "is_configured": true, 00:09:34.907 "data_offset": 0, 00:09:34.907 "data_size": 65536 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "name": "BaseBdev2", 00:09:34.907 "uuid": "ce7e65c2-180d-409c-8af1-b232018c1348", 00:09:34.907 "is_configured": true, 00:09:34.907 "data_offset": 0, 00:09:34.907 "data_size": 65536 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "name": "BaseBdev3", 00:09:34.907 "uuid": 
"eea8f8ae-61b1-4553-9d95-8eadb3199d15", 00:09:34.907 "is_configured": true, 00:09:34.907 "data_offset": 0, 00:09:34.907 "data_size": 65536 00:09:34.907 }, 00:09:34.907 { 00:09:34.907 "name": "BaseBdev4", 00:09:34.907 "uuid": "83397f0b-f159-4961-9e4a-47ae39d1fc4b", 00:09:34.907 "is_configured": true, 00:09:34.907 "data_offset": 0, 00:09:34.907 "data_size": 65536 00:09:34.907 } 00:09:34.907 ] 00:09:34.907 } 00:09:34.907 } 00:09:34.907 }' 00:09:34.908 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.908 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:34.908 BaseBdev2 00:09:34.908 BaseBdev3 00:09:34.908 BaseBdev4' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.168 [2024-11-26 15:25:33.583479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.168 [2024-11-26 15:25:33.583507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.168 [2024-11-26 15:25:33.583575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.168 [2024-11-26 15:25:33.583636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.168 [2024-11-26 15:25:33.583652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.168 15:25:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81934 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81934 ']' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 81934 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81934 00:09:35.168 killing process with pid 81934 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81934' 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 81934 00:09:35.168 [2024-11-26 15:25:33.629649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.168 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 81934 00:09:35.428 [2024-11-26 15:25:33.669514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.428 ************************************ 00:09:35.428 END TEST raid_state_function_test 00:09:35.428 ************************************ 00:09:35.428 15:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:35.428 00:09:35.428 real 0m9.223s 00:09:35.428 user 0m15.859s 00:09:35.428 sys 0m1.845s 00:09:35.428 15:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.428 15:25:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.688 15:25:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:09:35.688 15:25:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:35.688 15:25:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.688 15:25:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.689 ************************************ 00:09:35.689 START TEST raid_state_function_test_sb 00:09:35.689 ************************************ 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:35.689 15:25:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82583 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82583' 00:09:35.689 Process raid pid: 82583 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82583 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82583 ']' 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.689 15:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.689 [2024-11-26 15:25:34.029711] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:09:35.689 [2024-11-26 15:25:34.029831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.949 [2024-11-26 15:25:34.165370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:35.949 [2024-11-26 15:25:34.201633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.949 [2024-11-26 15:25:34.226806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.949 [2024-11-26 15:25:34.269924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.949 [2024-11-26 15:25:34.269979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.551 [2024-11-26 15:25:34.861078] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.551 [2024-11-26 15:25:34.861128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.551 [2024-11-26 15:25:34.861146] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.551 [2024-11-26 15:25:34.861155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.551 [2024-11-26 15:25:34.861165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.551 [2024-11-26 15:25:34.861171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.551 [2024-11-26 15:25:34.861189] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:09:36.551 [2024-11-26 15:25:34.861197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.551 "name": "Existed_Raid", 00:09:36.551 "uuid": "948f0a80-6aa5-4e96-bd6f-b96f2b8af7d9", 00:09:36.551 "strip_size_kb": 64, 00:09:36.551 "state": "configuring", 00:09:36.551 "raid_level": "raid0", 00:09:36.551 "superblock": true, 00:09:36.551 "num_base_bdevs": 4, 00:09:36.551 "num_base_bdevs_discovered": 0, 00:09:36.551 "num_base_bdevs_operational": 4, 00:09:36.551 "base_bdevs_list": [ 00:09:36.551 { 00:09:36.551 "name": "BaseBdev1", 00:09:36.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.551 "is_configured": false, 00:09:36.551 "data_offset": 0, 00:09:36.551 "data_size": 0 00:09:36.551 }, 00:09:36.551 { 00:09:36.551 "name": "BaseBdev2", 00:09:36.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.551 "is_configured": false, 00:09:36.551 "data_offset": 0, 00:09:36.551 "data_size": 0 00:09:36.551 }, 00:09:36.551 { 00:09:36.551 "name": "BaseBdev3", 00:09:36.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.551 "is_configured": false, 00:09:36.551 "data_offset": 0, 00:09:36.551 "data_size": 0 00:09:36.551 }, 00:09:36.551 { 00:09:36.551 "name": "BaseBdev4", 00:09:36.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.551 "is_configured": false, 00:09:36.551 "data_offset": 0, 00:09:36.551 "data_size": 0 00:09:36.551 } 00:09:36.551 ] 00:09:36.551 }' 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.551 15:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:37.123 [2024-11-26 15:25:35.309083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.123 [2024-11-26 15:25:35.309120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.123 [2024-11-26 15:25:35.321129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.123 [2024-11-26 15:25:35.321168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.123 [2024-11-26 15:25:35.321188] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.123 [2024-11-26 15:25:35.321196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.123 [2024-11-26 15:25:35.321204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.123 [2024-11-26 15:25:35.321210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.123 [2024-11-26 15:25:35.321217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:37.123 [2024-11-26 15:25:35.321224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.123 15:25:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.123 [2024-11-26 15:25:35.342124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.123 BaseBdev1 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.123 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.124 [ 00:09:37.124 { 00:09:37.124 "name": "BaseBdev1", 00:09:37.124 "aliases": [ 00:09:37.124 "57272d7d-4ecd-4f73-a8ad-a8a982a5c7d5" 00:09:37.124 ], 00:09:37.124 "product_name": "Malloc disk", 00:09:37.124 "block_size": 512, 00:09:37.124 "num_blocks": 65536, 00:09:37.124 "uuid": "57272d7d-4ecd-4f73-a8ad-a8a982a5c7d5", 00:09:37.124 "assigned_rate_limits": { 00:09:37.124 "rw_ios_per_sec": 0, 00:09:37.124 "rw_mbytes_per_sec": 0, 00:09:37.124 "r_mbytes_per_sec": 0, 00:09:37.124 "w_mbytes_per_sec": 0 00:09:37.124 }, 00:09:37.124 "claimed": true, 00:09:37.124 "claim_type": "exclusive_write", 00:09:37.124 "zoned": false, 00:09:37.124 "supported_io_types": { 00:09:37.124 "read": true, 00:09:37.124 "write": true, 00:09:37.124 "unmap": true, 00:09:37.124 "flush": true, 00:09:37.124 "reset": true, 00:09:37.124 "nvme_admin": false, 00:09:37.124 "nvme_io": false, 00:09:37.124 "nvme_io_md": false, 00:09:37.124 "write_zeroes": true, 00:09:37.124 "zcopy": true, 00:09:37.124 "get_zone_info": false, 00:09:37.124 "zone_management": false, 00:09:37.124 "zone_append": false, 00:09:37.124 "compare": false, 00:09:37.124 "compare_and_write": false, 00:09:37.124 "abort": true, 00:09:37.124 "seek_hole": false, 00:09:37.124 "seek_data": false, 00:09:37.124 "copy": true, 00:09:37.124 "nvme_iov_md": false 00:09:37.124 }, 00:09:37.124 "memory_domains": [ 00:09:37.124 { 00:09:37.124 "dma_device_id": "system", 00:09:37.124 "dma_device_type": 1 00:09:37.124 }, 00:09:37.124 { 00:09:37.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.124 "dma_device_type": 2 00:09:37.124 } 00:09:37.124 ], 00:09:37.124 "driver_specific": {} 00:09:37.124 } 00:09:37.124 ] 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.124 
15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.124 "name": "Existed_Raid", 00:09:37.124 "uuid": "8107f3b7-bde3-4d3f-8b2b-461bf3b58c92", 00:09:37.124 "strip_size_kb": 
64, 00:09:37.124 "state": "configuring", 00:09:37.124 "raid_level": "raid0", 00:09:37.124 "superblock": true, 00:09:37.124 "num_base_bdevs": 4, 00:09:37.124 "num_base_bdevs_discovered": 1, 00:09:37.124 "num_base_bdevs_operational": 4, 00:09:37.124 "base_bdevs_list": [ 00:09:37.124 { 00:09:37.124 "name": "BaseBdev1", 00:09:37.124 "uuid": "57272d7d-4ecd-4f73-a8ad-a8a982a5c7d5", 00:09:37.124 "is_configured": true, 00:09:37.124 "data_offset": 2048, 00:09:37.124 "data_size": 63488 00:09:37.124 }, 00:09:37.124 { 00:09:37.124 "name": "BaseBdev2", 00:09:37.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.124 "is_configured": false, 00:09:37.124 "data_offset": 0, 00:09:37.124 "data_size": 0 00:09:37.124 }, 00:09:37.124 { 00:09:37.124 "name": "BaseBdev3", 00:09:37.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.124 "is_configured": false, 00:09:37.124 "data_offset": 0, 00:09:37.124 "data_size": 0 00:09:37.124 }, 00:09:37.124 { 00:09:37.124 "name": "BaseBdev4", 00:09:37.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.124 "is_configured": false, 00:09:37.124 "data_offset": 0, 00:09:37.124 "data_size": 0 00:09:37.124 } 00:09:37.124 ] 00:09:37.124 }' 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.124 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.384 [2024-11-26 15:25:35.818309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.384 [2024-11-26 15:25:35.818362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name 
Existed_Raid, state configuring 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.384 [2024-11-26 15:25:35.830407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.384 [2024-11-26 15:25:35.832204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.384 [2024-11-26 15:25:35.832240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.384 [2024-11-26 15:25:35.832266] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:37.384 [2024-11-26 15:25:35.832273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:37.384 [2024-11-26 15:25:35.832280] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:37.384 [2024-11-26 15:25:35.832286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.384 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.384 15:25:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.385 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.645 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.645 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.645 "name": "Existed_Raid", 00:09:37.645 "uuid": "76457b4b-6f21-45c3-bed2-aa5cd5fafb47", 00:09:37.645 "strip_size_kb": 64, 00:09:37.645 "state": "configuring", 00:09:37.645 "raid_level": "raid0", 00:09:37.645 "superblock": true, 00:09:37.645 "num_base_bdevs": 4, 00:09:37.645 
"num_base_bdevs_discovered": 1, 00:09:37.645 "num_base_bdevs_operational": 4, 00:09:37.645 "base_bdevs_list": [ 00:09:37.645 { 00:09:37.645 "name": "BaseBdev1", 00:09:37.645 "uuid": "57272d7d-4ecd-4f73-a8ad-a8a982a5c7d5", 00:09:37.645 "is_configured": true, 00:09:37.645 "data_offset": 2048, 00:09:37.645 "data_size": 63488 00:09:37.645 }, 00:09:37.645 { 00:09:37.645 "name": "BaseBdev2", 00:09:37.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.645 "is_configured": false, 00:09:37.645 "data_offset": 0, 00:09:37.645 "data_size": 0 00:09:37.645 }, 00:09:37.645 { 00:09:37.645 "name": "BaseBdev3", 00:09:37.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.645 "is_configured": false, 00:09:37.645 "data_offset": 0, 00:09:37.645 "data_size": 0 00:09:37.645 }, 00:09:37.645 { 00:09:37.645 "name": "BaseBdev4", 00:09:37.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.645 "is_configured": false, 00:09:37.645 "data_offset": 0, 00:09:37.645 "data_size": 0 00:09:37.645 } 00:09:37.645 ] 00:09:37.645 }' 00:09:37.645 15:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.645 15:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.906 [2024-11-26 15:25:36.229462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.906 BaseBdev2 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:37.906 15:25:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.906 [ 00:09:37.906 { 00:09:37.906 "name": "BaseBdev2", 00:09:37.906 "aliases": [ 00:09:37.906 "d684000f-dbad-4d7b-b1d9-88583e78f1ee" 00:09:37.906 ], 00:09:37.906 "product_name": "Malloc disk", 00:09:37.906 "block_size": 512, 00:09:37.906 "num_blocks": 65536, 00:09:37.906 "uuid": "d684000f-dbad-4d7b-b1d9-88583e78f1ee", 00:09:37.906 "assigned_rate_limits": { 00:09:37.906 "rw_ios_per_sec": 0, 00:09:37.906 "rw_mbytes_per_sec": 0, 00:09:37.906 "r_mbytes_per_sec": 0, 00:09:37.906 "w_mbytes_per_sec": 0 00:09:37.906 }, 00:09:37.906 "claimed": true, 00:09:37.906 "claim_type": "exclusive_write", 00:09:37.906 "zoned": false, 
00:09:37.906 "supported_io_types": { 00:09:37.906 "read": true, 00:09:37.906 "write": true, 00:09:37.906 "unmap": true, 00:09:37.906 "flush": true, 00:09:37.906 "reset": true, 00:09:37.906 "nvme_admin": false, 00:09:37.906 "nvme_io": false, 00:09:37.906 "nvme_io_md": false, 00:09:37.906 "write_zeroes": true, 00:09:37.906 "zcopy": true, 00:09:37.906 "get_zone_info": false, 00:09:37.906 "zone_management": false, 00:09:37.906 "zone_append": false, 00:09:37.906 "compare": false, 00:09:37.906 "compare_and_write": false, 00:09:37.906 "abort": true, 00:09:37.906 "seek_hole": false, 00:09:37.906 "seek_data": false, 00:09:37.906 "copy": true, 00:09:37.906 "nvme_iov_md": false 00:09:37.906 }, 00:09:37.906 "memory_domains": [ 00:09:37.906 { 00:09:37.906 "dma_device_id": "system", 00:09:37.906 "dma_device_type": 1 00:09:37.906 }, 00:09:37.906 { 00:09:37.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.906 "dma_device_type": 2 00:09:37.906 } 00:09:37.906 ], 00:09:37.906 "driver_specific": {} 00:09:37.906 } 00:09:37.906 ] 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:37.906 15:25:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.906 "name": "Existed_Raid", 00:09:37.906 "uuid": "76457b4b-6f21-45c3-bed2-aa5cd5fafb47", 00:09:37.906 "strip_size_kb": 64, 00:09:37.906 "state": "configuring", 00:09:37.906 "raid_level": "raid0", 00:09:37.906 "superblock": true, 00:09:37.906 "num_base_bdevs": 4, 00:09:37.906 "num_base_bdevs_discovered": 2, 00:09:37.906 "num_base_bdevs_operational": 4, 00:09:37.906 "base_bdevs_list": [ 00:09:37.906 { 00:09:37.906 "name": "BaseBdev1", 00:09:37.906 "uuid": "57272d7d-4ecd-4f73-a8ad-a8a982a5c7d5", 00:09:37.906 "is_configured": true, 00:09:37.906 "data_offset": 2048, 00:09:37.906 "data_size": 63488 00:09:37.906 }, 00:09:37.906 { 
00:09:37.906 "name": "BaseBdev2", 00:09:37.906 "uuid": "d684000f-dbad-4d7b-b1d9-88583e78f1ee", 00:09:37.906 "is_configured": true, 00:09:37.906 "data_offset": 2048, 00:09:37.906 "data_size": 63488 00:09:37.906 }, 00:09:37.906 { 00:09:37.906 "name": "BaseBdev3", 00:09:37.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.906 "is_configured": false, 00:09:37.906 "data_offset": 0, 00:09:37.906 "data_size": 0 00:09:37.906 }, 00:09:37.906 { 00:09:37.906 "name": "BaseBdev4", 00:09:37.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.906 "is_configured": false, 00:09:37.906 "data_offset": 0, 00:09:37.906 "data_size": 0 00:09:37.906 } 00:09:37.906 ] 00:09:37.906 }' 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.906 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.477 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:38.477 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.477 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.477 [2024-11-26 15:25:36.678642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.477 BaseBdev3 00:09:38.477 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.478 15:25:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.478 [ 00:09:38.478 { 00:09:38.478 "name": "BaseBdev3", 00:09:38.478 "aliases": [ 00:09:38.478 "b25f4ef0-29cb-4582-9d9f-d341d693f966" 00:09:38.478 ], 00:09:38.478 "product_name": "Malloc disk", 00:09:38.478 "block_size": 512, 00:09:38.478 "num_blocks": 65536, 00:09:38.478 "uuid": "b25f4ef0-29cb-4582-9d9f-d341d693f966", 00:09:38.478 "assigned_rate_limits": { 00:09:38.478 "rw_ios_per_sec": 0, 00:09:38.478 "rw_mbytes_per_sec": 0, 00:09:38.478 "r_mbytes_per_sec": 0, 00:09:38.478 "w_mbytes_per_sec": 0 00:09:38.478 }, 00:09:38.478 "claimed": true, 00:09:38.478 "claim_type": "exclusive_write", 00:09:38.478 "zoned": false, 00:09:38.478 "supported_io_types": { 00:09:38.478 "read": true, 00:09:38.478 "write": true, 00:09:38.478 "unmap": true, 00:09:38.478 "flush": true, 00:09:38.478 "reset": true, 00:09:38.478 "nvme_admin": false, 00:09:38.478 "nvme_io": false, 00:09:38.478 "nvme_io_md": false, 00:09:38.478 "write_zeroes": true, 00:09:38.478 "zcopy": true, 
00:09:38.478 "get_zone_info": false, 00:09:38.478 "zone_management": false, 00:09:38.478 "zone_append": false, 00:09:38.478 "compare": false, 00:09:38.478 "compare_and_write": false, 00:09:38.478 "abort": true, 00:09:38.478 "seek_hole": false, 00:09:38.478 "seek_data": false, 00:09:38.478 "copy": true, 00:09:38.478 "nvme_iov_md": false 00:09:38.478 }, 00:09:38.478 "memory_domains": [ 00:09:38.478 { 00:09:38.478 "dma_device_id": "system", 00:09:38.478 "dma_device_type": 1 00:09:38.478 }, 00:09:38.478 { 00:09:38.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.478 "dma_device_type": 2 00:09:38.478 } 00:09:38.478 ], 00:09:38.478 "driver_specific": {} 00:09:38.478 } 00:09:38.478 ] 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.478 
15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.478 "name": "Existed_Raid", 00:09:38.478 "uuid": "76457b4b-6f21-45c3-bed2-aa5cd5fafb47", 00:09:38.478 "strip_size_kb": 64, 00:09:38.478 "state": "configuring", 00:09:38.478 "raid_level": "raid0", 00:09:38.478 "superblock": true, 00:09:38.478 "num_base_bdevs": 4, 00:09:38.478 "num_base_bdevs_discovered": 3, 00:09:38.478 "num_base_bdevs_operational": 4, 00:09:38.478 "base_bdevs_list": [ 00:09:38.478 { 00:09:38.478 "name": "BaseBdev1", 00:09:38.478 "uuid": "57272d7d-4ecd-4f73-a8ad-a8a982a5c7d5", 00:09:38.478 "is_configured": true, 00:09:38.478 "data_offset": 2048, 00:09:38.478 "data_size": 63488 00:09:38.478 }, 00:09:38.478 { 00:09:38.478 "name": "BaseBdev2", 00:09:38.478 "uuid": "d684000f-dbad-4d7b-b1d9-88583e78f1ee", 00:09:38.478 "is_configured": true, 00:09:38.478 "data_offset": 2048, 00:09:38.478 "data_size": 63488 00:09:38.478 }, 00:09:38.478 { 00:09:38.478 "name": "BaseBdev3", 00:09:38.478 "uuid": "b25f4ef0-29cb-4582-9d9f-d341d693f966", 00:09:38.478 
"is_configured": true, 00:09:38.478 "data_offset": 2048, 00:09:38.478 "data_size": 63488 00:09:38.478 }, 00:09:38.478 { 00:09:38.478 "name": "BaseBdev4", 00:09:38.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.478 "is_configured": false, 00:09:38.478 "data_offset": 0, 00:09:38.478 "data_size": 0 00:09:38.478 } 00:09:38.478 ] 00:09:38.478 }' 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.478 15:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.739 [2024-11-26 15:25:37.169988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:38.739 [2024-11-26 15:25:37.170175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:38.739 [2024-11-26 15:25:37.170232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:38.739 [2024-11-26 15:25:37.170533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:38.739 BaseBdev4 00:09:38.739 [2024-11-26 15:25:37.170676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:38.739 [2024-11-26 15:25:37.170698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:38.739 [2024-11-26 15:25:37.170826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.739 15:25:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.739 [ 00:09:38.739 { 00:09:38.739 "name": "BaseBdev4", 00:09:38.739 "aliases": [ 00:09:38.739 "44979fc3-9f50-4643-b44b-801cbf242c3b" 00:09:38.739 ], 00:09:38.739 "product_name": "Malloc disk", 00:09:38.739 "block_size": 512, 00:09:38.739 "num_blocks": 65536, 00:09:38.739 "uuid": "44979fc3-9f50-4643-b44b-801cbf242c3b", 00:09:38.739 "assigned_rate_limits": { 00:09:38.739 "rw_ios_per_sec": 0, 00:09:38.739 "rw_mbytes_per_sec": 0, 00:09:38.739 "r_mbytes_per_sec": 0, 00:09:38.739 "w_mbytes_per_sec": 0 
00:09:38.739 }, 00:09:38.739 "claimed": true, 00:09:38.739 "claim_type": "exclusive_write", 00:09:38.739 "zoned": false, 00:09:38.739 "supported_io_types": { 00:09:38.739 "read": true, 00:09:38.739 "write": true, 00:09:38.739 "unmap": true, 00:09:38.739 "flush": true, 00:09:38.739 "reset": true, 00:09:38.739 "nvme_admin": false, 00:09:38.739 "nvme_io": false, 00:09:38.739 "nvme_io_md": false, 00:09:38.739 "write_zeroes": true, 00:09:38.739 "zcopy": true, 00:09:38.739 "get_zone_info": false, 00:09:38.739 "zone_management": false, 00:09:38.739 "zone_append": false, 00:09:38.739 "compare": false, 00:09:38.739 "compare_and_write": false, 00:09:38.739 "abort": true, 00:09:38.739 "seek_hole": false, 00:09:38.739 "seek_data": false, 00:09:38.739 "copy": true, 00:09:38.739 "nvme_iov_md": false 00:09:38.739 }, 00:09:38.739 "memory_domains": [ 00:09:38.739 { 00:09:38.739 "dma_device_id": "system", 00:09:38.739 "dma_device_type": 1 00:09:38.739 }, 00:09:38.739 { 00:09:38.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.739 "dma_device_type": 2 00:09:38.739 } 00:09:38.739 ], 00:09:38.739 "driver_specific": {} 00:09:38.739 } 00:09:38.739 ] 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.739 15:25:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.739 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.740 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.740 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.740 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.999 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.999 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.999 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.999 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.999 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.999 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.999 "name": "Existed_Raid", 00:09:38.999 "uuid": "76457b4b-6f21-45c3-bed2-aa5cd5fafb47", 00:09:38.999 "strip_size_kb": 64, 00:09:38.999 "state": "online", 00:09:38.999 "raid_level": "raid0", 00:09:38.999 "superblock": true, 00:09:38.999 "num_base_bdevs": 4, 00:09:38.999 "num_base_bdevs_discovered": 4, 00:09:38.999 "num_base_bdevs_operational": 4, 00:09:38.999 "base_bdevs_list": [ 00:09:38.999 { 00:09:38.999 "name": "BaseBdev1", 00:09:38.999 "uuid": "57272d7d-4ecd-4f73-a8ad-a8a982a5c7d5", 00:09:38.999 "is_configured": 
true, 00:09:38.999 "data_offset": 2048, 00:09:38.999 "data_size": 63488 00:09:38.999 }, 00:09:38.999 { 00:09:38.999 "name": "BaseBdev2", 00:09:38.999 "uuid": "d684000f-dbad-4d7b-b1d9-88583e78f1ee", 00:09:38.999 "is_configured": true, 00:09:38.999 "data_offset": 2048, 00:09:38.999 "data_size": 63488 00:09:38.999 }, 00:09:38.999 { 00:09:38.999 "name": "BaseBdev3", 00:09:38.999 "uuid": "b25f4ef0-29cb-4582-9d9f-d341d693f966", 00:09:38.999 "is_configured": true, 00:09:38.999 "data_offset": 2048, 00:09:38.999 "data_size": 63488 00:09:38.999 }, 00:09:38.999 { 00:09:38.999 "name": "BaseBdev4", 00:09:38.999 "uuid": "44979fc3-9f50-4643-b44b-801cbf242c3b", 00:09:38.999 "is_configured": true, 00:09:38.999 "data_offset": 2048, 00:09:38.999 "data_size": 63488 00:09:38.999 } 00:09:38.999 ] 00:09:38.999 }' 00:09:38.999 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.999 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.259 15:25:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.259 [2024-11-26 15:25:37.670474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.259 "name": "Existed_Raid", 00:09:39.259 "aliases": [ 00:09:39.259 "76457b4b-6f21-45c3-bed2-aa5cd5fafb47" 00:09:39.259 ], 00:09:39.259 "product_name": "Raid Volume", 00:09:39.259 "block_size": 512, 00:09:39.259 "num_blocks": 253952, 00:09:39.259 "uuid": "76457b4b-6f21-45c3-bed2-aa5cd5fafb47", 00:09:39.259 "assigned_rate_limits": { 00:09:39.259 "rw_ios_per_sec": 0, 00:09:39.259 "rw_mbytes_per_sec": 0, 00:09:39.259 "r_mbytes_per_sec": 0, 00:09:39.259 "w_mbytes_per_sec": 0 00:09:39.259 }, 00:09:39.259 "claimed": false, 00:09:39.259 "zoned": false, 00:09:39.259 "supported_io_types": { 00:09:39.259 "read": true, 00:09:39.259 "write": true, 00:09:39.259 "unmap": true, 00:09:39.259 "flush": true, 00:09:39.259 "reset": true, 00:09:39.259 "nvme_admin": false, 00:09:39.259 "nvme_io": false, 00:09:39.259 "nvme_io_md": false, 00:09:39.259 "write_zeroes": true, 00:09:39.259 "zcopy": false, 00:09:39.259 "get_zone_info": false, 00:09:39.259 "zone_management": false, 00:09:39.259 "zone_append": false, 00:09:39.259 "compare": false, 00:09:39.259 "compare_and_write": false, 00:09:39.259 "abort": false, 00:09:39.259 "seek_hole": false, 00:09:39.259 "seek_data": false, 00:09:39.259 "copy": false, 00:09:39.259 "nvme_iov_md": false 00:09:39.259 }, 00:09:39.259 "memory_domains": [ 00:09:39.259 { 00:09:39.259 "dma_device_id": "system", 00:09:39.259 "dma_device_type": 1 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:39.259 "dma_device_type": 2 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "dma_device_id": "system", 00:09:39.259 "dma_device_type": 1 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.259 "dma_device_type": 2 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "dma_device_id": "system", 00:09:39.259 "dma_device_type": 1 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.259 "dma_device_type": 2 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "dma_device_id": "system", 00:09:39.259 "dma_device_type": 1 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.259 "dma_device_type": 2 00:09:39.259 } 00:09:39.259 ], 00:09:39.259 "driver_specific": { 00:09:39.259 "raid": { 00:09:39.259 "uuid": "76457b4b-6f21-45c3-bed2-aa5cd5fafb47", 00:09:39.259 "strip_size_kb": 64, 00:09:39.259 "state": "online", 00:09:39.259 "raid_level": "raid0", 00:09:39.259 "superblock": true, 00:09:39.259 "num_base_bdevs": 4, 00:09:39.259 "num_base_bdevs_discovered": 4, 00:09:39.259 "num_base_bdevs_operational": 4, 00:09:39.259 "base_bdevs_list": [ 00:09:39.259 { 00:09:39.259 "name": "BaseBdev1", 00:09:39.259 "uuid": "57272d7d-4ecd-4f73-a8ad-a8a982a5c7d5", 00:09:39.259 "is_configured": true, 00:09:39.259 "data_offset": 2048, 00:09:39.259 "data_size": 63488 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "name": "BaseBdev2", 00:09:39.259 "uuid": "d684000f-dbad-4d7b-b1d9-88583e78f1ee", 00:09:39.259 "is_configured": true, 00:09:39.259 "data_offset": 2048, 00:09:39.259 "data_size": 63488 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "name": "BaseBdev3", 00:09:39.259 "uuid": "b25f4ef0-29cb-4582-9d9f-d341d693f966", 00:09:39.259 "is_configured": true, 00:09:39.259 "data_offset": 2048, 00:09:39.259 "data_size": 63488 00:09:39.259 }, 00:09:39.259 { 00:09:39.259 "name": "BaseBdev4", 00:09:39.259 "uuid": "44979fc3-9f50-4643-b44b-801cbf242c3b", 00:09:39.259 "is_configured": true, 00:09:39.259 
"data_offset": 2048, 00:09:39.259 "data_size": 63488 00:09:39.259 } 00:09:39.259 ] 00:09:39.259 } 00:09:39.259 } 00:09:39.259 }' 00:09:39.259 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:39.520 BaseBdev2 00:09:39.520 BaseBdev3 00:09:39.520 BaseBdev4' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.520 15:25:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.520 15:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.520 [2024-11-26 15:25:37.990295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.520 [2024-11-26 15:25:37.990321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.520 [2024-11-26 15:25:37.990391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.780 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.780 "name": "Existed_Raid", 00:09:39.780 "uuid": "76457b4b-6f21-45c3-bed2-aa5cd5fafb47", 
00:09:39.780 "strip_size_kb": 64, 00:09:39.780 "state": "offline", 00:09:39.780 "raid_level": "raid0", 00:09:39.780 "superblock": true, 00:09:39.780 "num_base_bdevs": 4, 00:09:39.780 "num_base_bdevs_discovered": 3, 00:09:39.780 "num_base_bdevs_operational": 3, 00:09:39.780 "base_bdevs_list": [ 00:09:39.780 { 00:09:39.780 "name": null, 00:09:39.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.781 "is_configured": false, 00:09:39.781 "data_offset": 0, 00:09:39.781 "data_size": 63488 00:09:39.781 }, 00:09:39.781 { 00:09:39.781 "name": "BaseBdev2", 00:09:39.781 "uuid": "d684000f-dbad-4d7b-b1d9-88583e78f1ee", 00:09:39.781 "is_configured": true, 00:09:39.781 "data_offset": 2048, 00:09:39.781 "data_size": 63488 00:09:39.781 }, 00:09:39.781 { 00:09:39.781 "name": "BaseBdev3", 00:09:39.781 "uuid": "b25f4ef0-29cb-4582-9d9f-d341d693f966", 00:09:39.781 "is_configured": true, 00:09:39.781 "data_offset": 2048, 00:09:39.781 "data_size": 63488 00:09:39.781 }, 00:09:39.781 { 00:09:39.781 "name": "BaseBdev4", 00:09:39.781 "uuid": "44979fc3-9f50-4643-b44b-801cbf242c3b", 00:09:39.781 "is_configured": true, 00:09:39.781 "data_offset": 2048, 00:09:39.781 "data_size": 63488 00:09:39.781 } 00:09:39.781 ] 00:09:39.781 }' 00:09:39.781 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.781 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.038 [2024-11-26 15:25:38.441627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # 
'[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:40.038 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.039 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.039 [2024-11-26 15:25:38.504808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.298 [2024-11-26 
15:25:38.576003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:40.298 [2024-11-26 15:25:38.576105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.298 BaseBdev2 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.298 [ 00:09:40.298 { 00:09:40.298 "name": "BaseBdev2", 00:09:40.298 "aliases": [ 00:09:40.298 "4c1540e2-3421-4d9d-ac3a-7331af371bf8" 00:09:40.298 ], 00:09:40.298 "product_name": "Malloc disk", 00:09:40.298 "block_size": 512, 00:09:40.298 "num_blocks": 65536, 00:09:40.298 "uuid": 
"4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:40.298 "assigned_rate_limits": { 00:09:40.298 "rw_ios_per_sec": 0, 00:09:40.298 "rw_mbytes_per_sec": 0, 00:09:40.298 "r_mbytes_per_sec": 0, 00:09:40.298 "w_mbytes_per_sec": 0 00:09:40.298 }, 00:09:40.298 "claimed": false, 00:09:40.298 "zoned": false, 00:09:40.298 "supported_io_types": { 00:09:40.298 "read": true, 00:09:40.298 "write": true, 00:09:40.298 "unmap": true, 00:09:40.298 "flush": true, 00:09:40.298 "reset": true, 00:09:40.298 "nvme_admin": false, 00:09:40.298 "nvme_io": false, 00:09:40.298 "nvme_io_md": false, 00:09:40.298 "write_zeroes": true, 00:09:40.298 "zcopy": true, 00:09:40.298 "get_zone_info": false, 00:09:40.298 "zone_management": false, 00:09:40.298 "zone_append": false, 00:09:40.298 "compare": false, 00:09:40.298 "compare_and_write": false, 00:09:40.298 "abort": true, 00:09:40.298 "seek_hole": false, 00:09:40.298 "seek_data": false, 00:09:40.298 "copy": true, 00:09:40.298 "nvme_iov_md": false 00:09:40.298 }, 00:09:40.298 "memory_domains": [ 00:09:40.298 { 00:09:40.298 "dma_device_id": "system", 00:09:40.298 "dma_device_type": 1 00:09:40.298 }, 00:09:40.298 { 00:09:40.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.298 "dma_device_type": 2 00:09:40.298 } 00:09:40.298 ], 00:09:40.298 "driver_specific": {} 00:09:40.298 } 00:09:40.298 ] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.298 BaseBdev3 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.298 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.299 [ 00:09:40.299 { 00:09:40.299 "name": "BaseBdev3", 00:09:40.299 "aliases": [ 00:09:40.299 "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca" 00:09:40.299 ], 00:09:40.299 "product_name": "Malloc disk", 00:09:40.299 "block_size": 512, 
00:09:40.299 "num_blocks": 65536, 00:09:40.299 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:40.299 "assigned_rate_limits": { 00:09:40.299 "rw_ios_per_sec": 0, 00:09:40.299 "rw_mbytes_per_sec": 0, 00:09:40.299 "r_mbytes_per_sec": 0, 00:09:40.299 "w_mbytes_per_sec": 0 00:09:40.299 }, 00:09:40.299 "claimed": false, 00:09:40.299 "zoned": false, 00:09:40.299 "supported_io_types": { 00:09:40.299 "read": true, 00:09:40.299 "write": true, 00:09:40.299 "unmap": true, 00:09:40.299 "flush": true, 00:09:40.299 "reset": true, 00:09:40.299 "nvme_admin": false, 00:09:40.299 "nvme_io": false, 00:09:40.299 "nvme_io_md": false, 00:09:40.299 "write_zeroes": true, 00:09:40.299 "zcopy": true, 00:09:40.299 "get_zone_info": false, 00:09:40.299 "zone_management": false, 00:09:40.299 "zone_append": false, 00:09:40.299 "compare": false, 00:09:40.299 "compare_and_write": false, 00:09:40.299 "abort": true, 00:09:40.299 "seek_hole": false, 00:09:40.299 "seek_data": false, 00:09:40.299 "copy": true, 00:09:40.299 "nvme_iov_md": false 00:09:40.299 }, 00:09:40.299 "memory_domains": [ 00:09:40.299 { 00:09:40.299 "dma_device_id": "system", 00:09:40.299 "dma_device_type": 1 00:09:40.299 }, 00:09:40.299 { 00:09:40.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.299 "dma_device_type": 2 00:09:40.299 } 00:09:40.299 ], 00:09:40.299 "driver_specific": {} 00:09:40.299 } 00:09:40.299 ] 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:40.299 15:25:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.299 BaseBdev4 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.299 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.559 [ 00:09:40.559 { 00:09:40.559 "name": "BaseBdev4", 00:09:40.559 "aliases": [ 00:09:40.559 "7c33d72b-72bd-48e3-bd81-1ad88d6afb36" 00:09:40.559 ], 
00:09:40.559 "product_name": "Malloc disk", 00:09:40.559 "block_size": 512, 00:09:40.559 "num_blocks": 65536, 00:09:40.559 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 00:09:40.559 "assigned_rate_limits": { 00:09:40.559 "rw_ios_per_sec": 0, 00:09:40.559 "rw_mbytes_per_sec": 0, 00:09:40.559 "r_mbytes_per_sec": 0, 00:09:40.559 "w_mbytes_per_sec": 0 00:09:40.559 }, 00:09:40.559 "claimed": false, 00:09:40.559 "zoned": false, 00:09:40.559 "supported_io_types": { 00:09:40.559 "read": true, 00:09:40.559 "write": true, 00:09:40.559 "unmap": true, 00:09:40.559 "flush": true, 00:09:40.559 "reset": true, 00:09:40.559 "nvme_admin": false, 00:09:40.559 "nvme_io": false, 00:09:40.559 "nvme_io_md": false, 00:09:40.559 "write_zeroes": true, 00:09:40.559 "zcopy": true, 00:09:40.559 "get_zone_info": false, 00:09:40.559 "zone_management": false, 00:09:40.559 "zone_append": false, 00:09:40.559 "compare": false, 00:09:40.559 "compare_and_write": false, 00:09:40.559 "abort": true, 00:09:40.559 "seek_hole": false, 00:09:40.559 "seek_data": false, 00:09:40.559 "copy": true, 00:09:40.559 "nvme_iov_md": false 00:09:40.559 }, 00:09:40.559 "memory_domains": [ 00:09:40.559 { 00:09:40.559 "dma_device_id": "system", 00:09:40.559 "dma_device_type": 1 00:09:40.559 }, 00:09:40.559 { 00:09:40.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.559 "dma_device_type": 2 00:09:40.559 } 00:09:40.559 ], 00:09:40.559 "driver_specific": {} 00:09:40.559 } 00:09:40.559 ] 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.559 [2024-11-26 15:25:38.803791] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.559 [2024-11-26 15:25:38.803892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.559 [2024-11-26 15:25:38.803929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.559 [2024-11-26 15:25:38.805713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.559 [2024-11-26 15:25:38.805798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.559 15:25:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.560 "name": "Existed_Raid", 00:09:40.560 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:40.560 "strip_size_kb": 64, 00:09:40.560 "state": "configuring", 00:09:40.560 "raid_level": "raid0", 00:09:40.560 "superblock": true, 00:09:40.560 "num_base_bdevs": 4, 00:09:40.560 "num_base_bdevs_discovered": 3, 00:09:40.560 "num_base_bdevs_operational": 4, 00:09:40.560 "base_bdevs_list": [ 00:09:40.560 { 00:09:40.560 "name": "BaseBdev1", 00:09:40.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.560 "is_configured": false, 00:09:40.560 "data_offset": 0, 00:09:40.560 "data_size": 0 00:09:40.560 }, 00:09:40.560 { 00:09:40.560 "name": "BaseBdev2", 00:09:40.560 "uuid": "4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:40.560 "is_configured": true, 00:09:40.560 "data_offset": 2048, 00:09:40.560 "data_size": 63488 00:09:40.560 }, 00:09:40.560 { 00:09:40.560 "name": "BaseBdev3", 00:09:40.560 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:40.560 "is_configured": true, 00:09:40.560 "data_offset": 2048, 
00:09:40.560 "data_size": 63488 00:09:40.560 }, 00:09:40.560 { 00:09:40.560 "name": "BaseBdev4", 00:09:40.560 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 00:09:40.560 "is_configured": true, 00:09:40.560 "data_offset": 2048, 00:09:40.560 "data_size": 63488 00:09:40.560 } 00:09:40.560 ] 00:09:40.560 }' 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.560 15:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.820 [2024-11-26 15:25:39.207875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.820 "name": "Existed_Raid", 00:09:40.820 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:40.820 "strip_size_kb": 64, 00:09:40.820 "state": "configuring", 00:09:40.820 "raid_level": "raid0", 00:09:40.820 "superblock": true, 00:09:40.820 "num_base_bdevs": 4, 00:09:40.820 "num_base_bdevs_discovered": 2, 00:09:40.820 "num_base_bdevs_operational": 4, 00:09:40.820 "base_bdevs_list": [ 00:09:40.820 { 00:09:40.820 "name": "BaseBdev1", 00:09:40.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.820 "is_configured": false, 00:09:40.820 "data_offset": 0, 00:09:40.820 "data_size": 0 00:09:40.820 }, 00:09:40.820 { 00:09:40.820 "name": null, 00:09:40.820 "uuid": "4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:40.820 "is_configured": false, 00:09:40.820 "data_offset": 0, 00:09:40.820 "data_size": 63488 00:09:40.820 }, 00:09:40.820 { 00:09:40.820 "name": "BaseBdev3", 00:09:40.820 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:40.820 "is_configured": true, 00:09:40.820 "data_offset": 2048, 00:09:40.820 
"data_size": 63488 00:09:40.820 }, 00:09:40.820 { 00:09:40.820 "name": "BaseBdev4", 00:09:40.820 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 00:09:40.820 "is_configured": true, 00:09:40.820 "data_offset": 2048, 00:09:40.820 "data_size": 63488 00:09:40.820 } 00:09:40.820 ] 00:09:40.820 }' 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.820 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.390 [2024-11-26 15:25:39.674986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.390 BaseBdev1 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.390 [ 00:09:41.390 { 00:09:41.390 "name": "BaseBdev1", 00:09:41.390 "aliases": [ 00:09:41.390 "95c4fe9f-b8b6-4ca8-bbab-476f47053c16" 00:09:41.390 ], 00:09:41.390 "product_name": "Malloc disk", 00:09:41.390 "block_size": 512, 00:09:41.390 "num_blocks": 65536, 00:09:41.390 "uuid": "95c4fe9f-b8b6-4ca8-bbab-476f47053c16", 00:09:41.390 "assigned_rate_limits": { 00:09:41.390 "rw_ios_per_sec": 0, 00:09:41.390 "rw_mbytes_per_sec": 0, 00:09:41.390 "r_mbytes_per_sec": 0, 00:09:41.390 "w_mbytes_per_sec": 0 00:09:41.390 }, 00:09:41.390 "claimed": true, 00:09:41.390 "claim_type": "exclusive_write", 00:09:41.390 "zoned": false, 00:09:41.390 "supported_io_types": { 
00:09:41.390 "read": true, 00:09:41.390 "write": true, 00:09:41.390 "unmap": true, 00:09:41.390 "flush": true, 00:09:41.390 "reset": true, 00:09:41.390 "nvme_admin": false, 00:09:41.390 "nvme_io": false, 00:09:41.390 "nvme_io_md": false, 00:09:41.390 "write_zeroes": true, 00:09:41.390 "zcopy": true, 00:09:41.390 "get_zone_info": false, 00:09:41.390 "zone_management": false, 00:09:41.390 "zone_append": false, 00:09:41.390 "compare": false, 00:09:41.390 "compare_and_write": false, 00:09:41.390 "abort": true, 00:09:41.390 "seek_hole": false, 00:09:41.390 "seek_data": false, 00:09:41.390 "copy": true, 00:09:41.390 "nvme_iov_md": false 00:09:41.390 }, 00:09:41.390 "memory_domains": [ 00:09:41.390 { 00:09:41.390 "dma_device_id": "system", 00:09:41.390 "dma_device_type": 1 00:09:41.390 }, 00:09:41.390 { 00:09:41.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.390 "dma_device_type": 2 00:09:41.390 } 00:09:41.390 ], 00:09:41.390 "driver_specific": {} 00:09:41.390 } 00:09:41.390 ] 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.390 15:25:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.390 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.390 "name": "Existed_Raid", 00:09:41.390 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:41.390 "strip_size_kb": 64, 00:09:41.390 "state": "configuring", 00:09:41.390 "raid_level": "raid0", 00:09:41.390 "superblock": true, 00:09:41.390 "num_base_bdevs": 4, 00:09:41.390 "num_base_bdevs_discovered": 3, 00:09:41.390 "num_base_bdevs_operational": 4, 00:09:41.390 "base_bdevs_list": [ 00:09:41.390 { 00:09:41.390 "name": "BaseBdev1", 00:09:41.390 "uuid": "95c4fe9f-b8b6-4ca8-bbab-476f47053c16", 00:09:41.390 "is_configured": true, 00:09:41.390 "data_offset": 2048, 00:09:41.390 "data_size": 63488 00:09:41.390 }, 00:09:41.390 { 00:09:41.390 "name": null, 00:09:41.390 "uuid": "4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:41.390 "is_configured": false, 00:09:41.390 "data_offset": 0, 00:09:41.390 "data_size": 63488 00:09:41.390 }, 00:09:41.390 { 00:09:41.390 "name": 
"BaseBdev3", 00:09:41.390 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:41.390 "is_configured": true, 00:09:41.390 "data_offset": 2048, 00:09:41.390 "data_size": 63488 00:09:41.390 }, 00:09:41.390 { 00:09:41.390 "name": "BaseBdev4", 00:09:41.390 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 00:09:41.390 "is_configured": true, 00:09:41.390 "data_offset": 2048, 00:09:41.390 "data_size": 63488 00:09:41.390 } 00:09:41.390 ] 00:09:41.390 }' 00:09:41.391 15:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.391 15:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 [2024-11-26 15:25:40.179168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.961 "name": "Existed_Raid", 00:09:41.961 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:41.961 "strip_size_kb": 64, 00:09:41.961 "state": "configuring", 
00:09:41.961 "raid_level": "raid0", 00:09:41.961 "superblock": true, 00:09:41.961 "num_base_bdevs": 4, 00:09:41.961 "num_base_bdevs_discovered": 2, 00:09:41.961 "num_base_bdevs_operational": 4, 00:09:41.961 "base_bdevs_list": [ 00:09:41.961 { 00:09:41.961 "name": "BaseBdev1", 00:09:41.961 "uuid": "95c4fe9f-b8b6-4ca8-bbab-476f47053c16", 00:09:41.961 "is_configured": true, 00:09:41.961 "data_offset": 2048, 00:09:41.961 "data_size": 63488 00:09:41.961 }, 00:09:41.961 { 00:09:41.961 "name": null, 00:09:41.961 "uuid": "4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:41.961 "is_configured": false, 00:09:41.961 "data_offset": 0, 00:09:41.961 "data_size": 63488 00:09:41.961 }, 00:09:41.961 { 00:09:41.961 "name": null, 00:09:41.961 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:41.961 "is_configured": false, 00:09:41.961 "data_offset": 0, 00:09:41.961 "data_size": 63488 00:09:41.961 }, 00:09:41.961 { 00:09:41.961 "name": "BaseBdev4", 00:09:41.961 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 00:09:41.961 "is_configured": true, 00:09:41.961 "data_offset": 2048, 00:09:41.961 "data_size": 63488 00:09:41.961 } 00:09:41.961 ] 00:09:41.961 }' 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.961 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.222 15:25:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.222 [2024-11-26 15:25:40.643366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.222 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.222 "name": "Existed_Raid", 00:09:42.222 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:42.222 "strip_size_kb": 64, 00:09:42.222 "state": "configuring", 00:09:42.222 "raid_level": "raid0", 00:09:42.222 "superblock": true, 00:09:42.222 "num_base_bdevs": 4, 00:09:42.222 "num_base_bdevs_discovered": 3, 00:09:42.222 "num_base_bdevs_operational": 4, 00:09:42.222 "base_bdevs_list": [ 00:09:42.222 { 00:09:42.222 "name": "BaseBdev1", 00:09:42.222 "uuid": "95c4fe9f-b8b6-4ca8-bbab-476f47053c16", 00:09:42.222 "is_configured": true, 00:09:42.222 "data_offset": 2048, 00:09:42.222 "data_size": 63488 00:09:42.222 }, 00:09:42.222 { 00:09:42.222 "name": null, 00:09:42.222 "uuid": "4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:42.222 "is_configured": false, 00:09:42.222 "data_offset": 0, 00:09:42.223 "data_size": 63488 00:09:42.223 }, 00:09:42.223 { 00:09:42.223 "name": "BaseBdev3", 00:09:42.223 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:42.223 "is_configured": true, 00:09:42.223 "data_offset": 2048, 00:09:42.223 "data_size": 63488 00:09:42.223 }, 00:09:42.223 { 00:09:42.223 "name": "BaseBdev4", 00:09:42.223 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 00:09:42.223 "is_configured": true, 00:09:42.223 "data_offset": 2048, 00:09:42.223 "data_size": 63488 00:09:42.223 } 00:09:42.223 ] 00:09:42.223 }' 00:09:42.223 15:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:42.223 15:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.792 [2024-11-26 15:25:41.111515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.792 "name": "Existed_Raid", 00:09:42.792 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:42.792 "strip_size_kb": 64, 00:09:42.792 "state": "configuring", 00:09:42.792 "raid_level": "raid0", 00:09:42.792 "superblock": true, 00:09:42.792 "num_base_bdevs": 4, 00:09:42.792 "num_base_bdevs_discovered": 2, 00:09:42.792 "num_base_bdevs_operational": 4, 00:09:42.792 "base_bdevs_list": [ 00:09:42.792 { 00:09:42.792 "name": null, 00:09:42.792 "uuid": "95c4fe9f-b8b6-4ca8-bbab-476f47053c16", 00:09:42.792 "is_configured": false, 00:09:42.792 "data_offset": 0, 00:09:42.792 "data_size": 63488 00:09:42.792 }, 00:09:42.792 { 00:09:42.792 "name": null, 00:09:42.792 "uuid": "4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:42.792 
"is_configured": false, 00:09:42.792 "data_offset": 0, 00:09:42.792 "data_size": 63488 00:09:42.792 }, 00:09:42.792 { 00:09:42.792 "name": "BaseBdev3", 00:09:42.792 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:42.792 "is_configured": true, 00:09:42.792 "data_offset": 2048, 00:09:42.792 "data_size": 63488 00:09:42.792 }, 00:09:42.792 { 00:09:42.792 "name": "BaseBdev4", 00:09:42.792 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 00:09:42.792 "is_configured": true, 00:09:42.792 "data_offset": 2048, 00:09:42.792 "data_size": 63488 00:09:42.792 } 00:09:42.792 ] 00:09:42.792 }' 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.792 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.390 [2024-11-26 15:25:41.610124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.390 
15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.390 
"name": "Existed_Raid", 00:09:43.390 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:43.390 "strip_size_kb": 64, 00:09:43.390 "state": "configuring", 00:09:43.390 "raid_level": "raid0", 00:09:43.390 "superblock": true, 00:09:43.390 "num_base_bdevs": 4, 00:09:43.390 "num_base_bdevs_discovered": 3, 00:09:43.390 "num_base_bdevs_operational": 4, 00:09:43.390 "base_bdevs_list": [ 00:09:43.390 { 00:09:43.390 "name": null, 00:09:43.390 "uuid": "95c4fe9f-b8b6-4ca8-bbab-476f47053c16", 00:09:43.390 "is_configured": false, 00:09:43.390 "data_offset": 0, 00:09:43.390 "data_size": 63488 00:09:43.390 }, 00:09:43.390 { 00:09:43.390 "name": "BaseBdev2", 00:09:43.390 "uuid": "4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:43.390 "is_configured": true, 00:09:43.390 "data_offset": 2048, 00:09:43.390 "data_size": 63488 00:09:43.390 }, 00:09:43.390 { 00:09:43.390 "name": "BaseBdev3", 00:09:43.390 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:43.390 "is_configured": true, 00:09:43.390 "data_offset": 2048, 00:09:43.390 "data_size": 63488 00:09:43.390 }, 00:09:43.390 { 00:09:43.390 "name": "BaseBdev4", 00:09:43.390 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 00:09:43.390 "is_configured": true, 00:09:43.390 "data_offset": 2048, 00:09:43.390 "data_size": 63488 00:09:43.390 } 00:09:43.390 ] 00:09:43.390 }' 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.390 15:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 95c4fe9f-b8b6-4ca8-bbab-476f47053c16 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.651 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.911 [2024-11-26 15:25:42.137362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:43.911 NewBaseBdev 00:09:43.911 [2024-11-26 15:25:42.137579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:43.911 [2024-11-26 15:25:42.137602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:43.911 [2024-11-26 15:25:42.137846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:09:43.911 [2024-11-26 15:25:42.137961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.911 [2024-11-26 15:25:42.137971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:09:43.911 [2024-11-26 15:25:42.138067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.911 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.911 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:43.911 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:43.911 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.911 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:43.911 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.911 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.911 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.911 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.912 [ 00:09:43.912 { 00:09:43.912 "name": "NewBaseBdev", 00:09:43.912 "aliases": [ 00:09:43.912 "95c4fe9f-b8b6-4ca8-bbab-476f47053c16" 00:09:43.912 ], 00:09:43.912 "product_name": "Malloc disk", 00:09:43.912 "block_size": 512, 
00:09:43.912 "num_blocks": 65536, 00:09:43.912 "uuid": "95c4fe9f-b8b6-4ca8-bbab-476f47053c16", 00:09:43.912 "assigned_rate_limits": { 00:09:43.912 "rw_ios_per_sec": 0, 00:09:43.912 "rw_mbytes_per_sec": 0, 00:09:43.912 "r_mbytes_per_sec": 0, 00:09:43.912 "w_mbytes_per_sec": 0 00:09:43.912 }, 00:09:43.912 "claimed": true, 00:09:43.912 "claim_type": "exclusive_write", 00:09:43.912 "zoned": false, 00:09:43.912 "supported_io_types": { 00:09:43.912 "read": true, 00:09:43.912 "write": true, 00:09:43.912 "unmap": true, 00:09:43.912 "flush": true, 00:09:43.912 "reset": true, 00:09:43.912 "nvme_admin": false, 00:09:43.912 "nvme_io": false, 00:09:43.912 "nvme_io_md": false, 00:09:43.912 "write_zeroes": true, 00:09:43.912 "zcopy": true, 00:09:43.912 "get_zone_info": false, 00:09:43.912 "zone_management": false, 00:09:43.912 "zone_append": false, 00:09:43.912 "compare": false, 00:09:43.912 "compare_and_write": false, 00:09:43.912 "abort": true, 00:09:43.912 "seek_hole": false, 00:09:43.912 "seek_data": false, 00:09:43.912 "copy": true, 00:09:43.912 "nvme_iov_md": false 00:09:43.912 }, 00:09:43.912 "memory_domains": [ 00:09:43.912 { 00:09:43.912 "dma_device_id": "system", 00:09:43.912 "dma_device_type": 1 00:09:43.912 }, 00:09:43.912 { 00:09:43.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.912 "dma_device_type": 2 00:09:43.912 } 00:09:43.912 ], 00:09:43.912 "driver_specific": {} 00:09:43.912 } 00:09:43.912 ] 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.912 "name": "Existed_Raid", 00:09:43.912 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:43.912 "strip_size_kb": 64, 00:09:43.912 "state": "online", 00:09:43.912 "raid_level": "raid0", 00:09:43.912 "superblock": true, 00:09:43.912 "num_base_bdevs": 4, 00:09:43.912 "num_base_bdevs_discovered": 4, 00:09:43.912 "num_base_bdevs_operational": 4, 00:09:43.912 "base_bdevs_list": [ 00:09:43.912 { 00:09:43.912 "name": "NewBaseBdev", 00:09:43.912 "uuid": 
"95c4fe9f-b8b6-4ca8-bbab-476f47053c16", 00:09:43.912 "is_configured": true, 00:09:43.912 "data_offset": 2048, 00:09:43.912 "data_size": 63488 00:09:43.912 }, 00:09:43.912 { 00:09:43.912 "name": "BaseBdev2", 00:09:43.912 "uuid": "4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:43.912 "is_configured": true, 00:09:43.912 "data_offset": 2048, 00:09:43.912 "data_size": 63488 00:09:43.912 }, 00:09:43.912 { 00:09:43.912 "name": "BaseBdev3", 00:09:43.912 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:43.912 "is_configured": true, 00:09:43.912 "data_offset": 2048, 00:09:43.912 "data_size": 63488 00:09:43.912 }, 00:09:43.912 { 00:09:43.912 "name": "BaseBdev4", 00:09:43.912 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 00:09:43.912 "is_configured": true, 00:09:43.912 "data_offset": 2048, 00:09:43.912 "data_size": 63488 00:09:43.912 } 00:09:43.912 ] 00:09:43.912 }' 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.912 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.172 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.172 [2024-11-26 15:25:42.625853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.432 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.433 "name": "Existed_Raid", 00:09:44.433 "aliases": [ 00:09:44.433 "1a227dc5-0b5e-48c6-b5ec-a81720a53675" 00:09:44.433 ], 00:09:44.433 "product_name": "Raid Volume", 00:09:44.433 "block_size": 512, 00:09:44.433 "num_blocks": 253952, 00:09:44.433 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:44.433 "assigned_rate_limits": { 00:09:44.433 "rw_ios_per_sec": 0, 00:09:44.433 "rw_mbytes_per_sec": 0, 00:09:44.433 "r_mbytes_per_sec": 0, 00:09:44.433 "w_mbytes_per_sec": 0 00:09:44.433 }, 00:09:44.433 "claimed": false, 00:09:44.433 "zoned": false, 00:09:44.433 "supported_io_types": { 00:09:44.433 "read": true, 00:09:44.433 "write": true, 00:09:44.433 "unmap": true, 00:09:44.433 "flush": true, 00:09:44.433 "reset": true, 00:09:44.433 "nvme_admin": false, 00:09:44.433 "nvme_io": false, 00:09:44.433 "nvme_io_md": false, 00:09:44.433 "write_zeroes": true, 00:09:44.433 "zcopy": false, 00:09:44.433 "get_zone_info": false, 00:09:44.433 "zone_management": false, 00:09:44.433 "zone_append": false, 00:09:44.433 "compare": false, 00:09:44.433 "compare_and_write": false, 00:09:44.433 "abort": false, 00:09:44.433 "seek_hole": false, 00:09:44.433 "seek_data": false, 00:09:44.433 "copy": false, 00:09:44.433 "nvme_iov_md": false 00:09:44.433 }, 00:09:44.433 "memory_domains": [ 00:09:44.433 { 00:09:44.433 "dma_device_id": "system", 00:09:44.433 "dma_device_type": 1 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.433 "dma_device_type": 2 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 "dma_device_id": "system", 00:09:44.433 "dma_device_type": 1 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.433 "dma_device_type": 2 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 "dma_device_id": "system", 00:09:44.433 "dma_device_type": 1 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.433 "dma_device_type": 2 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 "dma_device_id": "system", 00:09:44.433 "dma_device_type": 1 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.433 "dma_device_type": 2 00:09:44.433 } 00:09:44.433 ], 00:09:44.433 "driver_specific": { 00:09:44.433 "raid": { 00:09:44.433 "uuid": "1a227dc5-0b5e-48c6-b5ec-a81720a53675", 00:09:44.433 "strip_size_kb": 64, 00:09:44.433 "state": "online", 00:09:44.433 "raid_level": "raid0", 00:09:44.433 "superblock": true, 00:09:44.433 "num_base_bdevs": 4, 00:09:44.433 "num_base_bdevs_discovered": 4, 00:09:44.433 "num_base_bdevs_operational": 4, 00:09:44.433 "base_bdevs_list": [ 00:09:44.433 { 00:09:44.433 "name": "NewBaseBdev", 00:09:44.433 "uuid": "95c4fe9f-b8b6-4ca8-bbab-476f47053c16", 00:09:44.433 "is_configured": true, 00:09:44.433 "data_offset": 2048, 00:09:44.433 "data_size": 63488 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 "name": "BaseBdev2", 00:09:44.433 "uuid": "4c1540e2-3421-4d9d-ac3a-7331af371bf8", 00:09:44.433 "is_configured": true, 00:09:44.433 "data_offset": 2048, 00:09:44.433 "data_size": 63488 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 "name": "BaseBdev3", 00:09:44.433 "uuid": "3866d373-c2d9-44bb-b62b-9a3fc9bc9dca", 00:09:44.433 "is_configured": true, 00:09:44.433 "data_offset": 2048, 00:09:44.433 "data_size": 63488 00:09:44.433 }, 00:09:44.433 { 00:09:44.433 "name": "BaseBdev4", 00:09:44.433 "uuid": "7c33d72b-72bd-48e3-bd81-1ad88d6afb36", 
00:09:44.433 "is_configured": true, 00:09:44.433 "data_offset": 2048, 00:09:44.433 "data_size": 63488 00:09:44.433 } 00:09:44.433 ] 00:09:44.433 } 00:09:44.433 } 00:09:44.433 }' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:44.433 BaseBdev2 00:09:44.433 BaseBdev3 00:09:44.433 BaseBdev4' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.433 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.694 [2024-11-26 15:25:42.957607] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.694 [2024-11-26 15:25:42.957682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.694 [2024-11-26 15:25:42.957797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.694 [2024-11-26 15:25:42.957881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.694 [2024-11-26 15:25:42.957934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82583 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # '[' -z 82583 ']' 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82583 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.694 15:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82583 00:09:44.694 killing process with pid 82583 00:09:44.694 15:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.694 15:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.694 15:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82583' 00:09:44.694 15:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82583 00:09:44.694 [2024-11-26 15:25:43.005666] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.694 15:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82583 00:09:44.694 [2024-11-26 15:25:43.046709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.954 15:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:44.954 00:09:44.954 real 0m9.328s 00:09:44.954 user 0m16.024s 00:09:44.954 sys 0m1.890s 00:09:44.954 ************************************ 00:09:44.954 END TEST raid_state_function_test_sb 00:09:44.954 ************************************ 00:09:44.954 15:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.954 15:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.954 15:25:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # 
run_test raid_superblock_test raid_superblock_test raid0 4 00:09:44.954 15:25:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:44.954 15:25:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.954 15:25:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.954 ************************************ 00:09:44.954 START TEST raid_superblock_test 00:09:44.954 ************************************ 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83226 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83226 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83226 ']' 00:09:44.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.954 15:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.954 [2024-11-26 15:25:43.419703] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:09:44.954 [2024-11-26 15:25:43.419829] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83226 ] 00:09:45.214 [2024-11-26 15:25:43.554287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:45.214 [2024-11-26 15:25:43.591456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.214 [2024-11-26 15:25:43.616737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.214 [2024-11-26 15:25:43.660109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.214 [2024-11-26 15:25:43.660148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.785 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 malloc1 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 [2024-11-26 15:25:44.264085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:46.046 [2024-11-26 15:25:44.264225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.046 [2024-11-26 15:25:44.264277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:46.046 [2024-11-26 15:25:44.264328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.046 [2024-11-26 15:25:44.266448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.046 [2024-11-26 15:25:44.266527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:46.046 pt1 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 malloc2 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 [2024-11-26 15:25:44.292836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.046 [2024-11-26 15:25:44.292886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.046 [2024-11-26 15:25:44.292903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:46.046 [2024-11-26 15:25:44.292911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.046 [2024-11-26 15:25:44.294920] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.046 [2024-11-26 15:25:44.294994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.046 pt2 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 malloc3 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.046 15:25:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 [2024-11-26 15:25:44.321472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:46.046 [2024-11-26 15:25:44.321558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.046 [2024-11-26 15:25:44.321594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:46.046 [2024-11-26 15:25:44.321621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.046 [2024-11-26 15:25:44.323662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.046 [2024-11-26 15:25:44.323729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:46.046 pt3 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 malloc4 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 [2024-11-26 15:25:44.359842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:46.046 [2024-11-26 15:25:44.359930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.046 [2024-11-26 15:25:44.359967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:46.046 [2024-11-26 15:25:44.359994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.046 [2024-11-26 15:25:44.362017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.046 [2024-11-26 15:25:44.362088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:46.046 pt4 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.046 15:25:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.046 [2024-11-26 15:25:44.371911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:46.046 [2024-11-26 15:25:44.373768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.046 [2024-11-26 15:25:44.373850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:46.046 [2024-11-26 15:25:44.373912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:46.046 [2024-11-26 15:25:44.374069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:46.046 [2024-11-26 15:25:44.374080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:46.046 [2024-11-26 15:25:44.374323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:46.046 [2024-11-26 15:25:44.374483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:46.046 [2024-11-26 15:25:44.374495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:46.046 [2024-11-26 15:25:44.374600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.046 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.047 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.047 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.047 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.047 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.047 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.047 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.047 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.047 "name": "raid_bdev1", 00:09:46.047 "uuid": "284d3ea4-57ad-4783-b900-fbe750213799", 00:09:46.047 "strip_size_kb": 64, 00:09:46.047 "state": "online", 00:09:46.047 "raid_level": "raid0", 00:09:46.047 "superblock": true, 00:09:46.047 "num_base_bdevs": 4, 00:09:46.047 "num_base_bdevs_discovered": 4, 00:09:46.047 "num_base_bdevs_operational": 4, 00:09:46.047 "base_bdevs_list": [ 00:09:46.047 { 00:09:46.047 "name": "pt1", 00:09:46.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.047 "is_configured": true, 00:09:46.047 "data_offset": 2048, 00:09:46.047 "data_size": 63488 00:09:46.047 }, 00:09:46.047 { 00:09:46.047 "name": "pt2", 00:09:46.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.047 "is_configured": true, 00:09:46.047 "data_offset": 2048, 00:09:46.047 
"data_size": 63488 00:09:46.047 }, 00:09:46.047 { 00:09:46.047 "name": "pt3", 00:09:46.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.047 "is_configured": true, 00:09:46.047 "data_offset": 2048, 00:09:46.047 "data_size": 63488 00:09:46.047 }, 00:09:46.047 { 00:09:46.047 "name": "pt4", 00:09:46.047 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:46.047 "is_configured": true, 00:09:46.047 "data_offset": 2048, 00:09:46.047 "data_size": 63488 00:09:46.047 } 00:09:46.047 ] 00:09:46.047 }' 00:09:46.047 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.047 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.617 [2024-11-26 15:25:44.816366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.617 "name": "raid_bdev1", 00:09:46.617 "aliases": [ 00:09:46.617 "284d3ea4-57ad-4783-b900-fbe750213799" 00:09:46.617 ], 00:09:46.617 "product_name": "Raid Volume", 00:09:46.617 "block_size": 512, 00:09:46.617 "num_blocks": 253952, 00:09:46.617 "uuid": "284d3ea4-57ad-4783-b900-fbe750213799", 00:09:46.617 "assigned_rate_limits": { 00:09:46.617 "rw_ios_per_sec": 0, 00:09:46.617 "rw_mbytes_per_sec": 0, 00:09:46.617 "r_mbytes_per_sec": 0, 00:09:46.617 "w_mbytes_per_sec": 0 00:09:46.617 }, 00:09:46.617 "claimed": false, 00:09:46.617 "zoned": false, 00:09:46.617 "supported_io_types": { 00:09:46.617 "read": true, 00:09:46.617 "write": true, 00:09:46.617 "unmap": true, 00:09:46.617 "flush": true, 00:09:46.617 "reset": true, 00:09:46.617 "nvme_admin": false, 00:09:46.617 "nvme_io": false, 00:09:46.617 "nvme_io_md": false, 00:09:46.617 "write_zeroes": true, 00:09:46.617 "zcopy": false, 00:09:46.617 "get_zone_info": false, 00:09:46.617 "zone_management": false, 00:09:46.617 "zone_append": false, 00:09:46.617 "compare": false, 00:09:46.617 "compare_and_write": false, 00:09:46.617 "abort": false, 00:09:46.617 "seek_hole": false, 00:09:46.617 "seek_data": false, 00:09:46.617 "copy": false, 00:09:46.617 "nvme_iov_md": false 00:09:46.617 }, 00:09:46.617 "memory_domains": [ 00:09:46.617 { 00:09:46.617 "dma_device_id": "system", 00:09:46.617 "dma_device_type": 1 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.617 "dma_device_type": 2 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "dma_device_id": "system", 00:09:46.617 "dma_device_type": 1 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.617 "dma_device_type": 2 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "dma_device_id": "system", 00:09:46.617 "dma_device_type": 1 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:46.617 "dma_device_type": 2 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "dma_device_id": "system", 00:09:46.617 "dma_device_type": 1 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.617 "dma_device_type": 2 00:09:46.617 } 00:09:46.617 ], 00:09:46.617 "driver_specific": { 00:09:46.617 "raid": { 00:09:46.617 "uuid": "284d3ea4-57ad-4783-b900-fbe750213799", 00:09:46.617 "strip_size_kb": 64, 00:09:46.617 "state": "online", 00:09:46.617 "raid_level": "raid0", 00:09:46.617 "superblock": true, 00:09:46.617 "num_base_bdevs": 4, 00:09:46.617 "num_base_bdevs_discovered": 4, 00:09:46.617 "num_base_bdevs_operational": 4, 00:09:46.617 "base_bdevs_list": [ 00:09:46.617 { 00:09:46.617 "name": "pt1", 00:09:46.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.617 "is_configured": true, 00:09:46.617 "data_offset": 2048, 00:09:46.617 "data_size": 63488 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "name": "pt2", 00:09:46.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.617 "is_configured": true, 00:09:46.617 "data_offset": 2048, 00:09:46.617 "data_size": 63488 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "name": "pt3", 00:09:46.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.617 "is_configured": true, 00:09:46.617 "data_offset": 2048, 00:09:46.617 "data_size": 63488 00:09:46.617 }, 00:09:46.617 { 00:09:46.617 "name": "pt4", 00:09:46.617 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:46.617 "is_configured": true, 00:09:46.617 "data_offset": 2048, 00:09:46.617 "data_size": 63488 00:09:46.617 } 00:09:46.617 ] 00:09:46.617 } 00:09:46.617 } 00:09:46.617 }' 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:46.617 pt2 00:09:46.617 pt3 00:09:46.617 pt4' 
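The trace above (bdev_raid.sh@188) pulls the configured base bdev names out of the raid bdev's info JSON with a `jq` filter. A minimal standalone sketch of that same filter, run against a hand-trimmed stand-in for the real `bdev_get_bdevs` output (the JSON below is illustrative, not captured from this run):

```shell
# Hypothetical trimmed-down raid bdev info, mirroring the shape seen in the log.
info='{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        { "name": "pt1", "is_configured": true },
        { "name": "pt2", "is_configured": true },
        { "name": "pt3", "is_configured": false }
      ]
    }
  }
}'

# Same filter as bdev_raid.sh@188: keep only configured members, emit their names.
names=$(echo "$info" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$names"   # pt1 and pt2, one per line; pt3 is filtered out
```

The result is then iterated with `for name in $base_bdev_names`, which is why the harness stores the names newline-separated in a single variable.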
00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:46.617 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.618 15:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.618 15:25:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.618 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.878 [2024-11-26 15:25:45.092340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=284d3ea4-57ad-4783-b900-fbe750213799 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 284d3ea4-57ad-4783-b900-fbe750213799 ']' 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.878 [2024-11-26 15:25:45.124049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.878 [2024-11-26 15:25:45.124073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.878 [2024-11-26 15:25:45.124140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.878 [2024-11-26 15:25:45.124223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.878 [2024-11-26 15:25:45.124239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.878 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 [2024-11-26 15:25:45.296145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:46.879 [2024-11-26 15:25:45.298106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:46.879 [2024-11-26 15:25:45.298149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:46.879 [2024-11-26 15:25:45.298178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:46.879 [2024-11-26 15:25:45.298234] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:46.879 [2024-11-26 15:25:45.298286] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:46.879 [2024-11-26 15:25:45.298303] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:46.879 [2024-11-26 15:25:45.298319] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:46.879 [2024-11-26 
15:25:45.298331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.879 [2024-11-26 15:25:45.298349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:09:46.879 request: 00:09:46.879 { 00:09:46.879 "name": "raid_bdev1", 00:09:46.879 "raid_level": "raid0", 00:09:46.879 "base_bdevs": [ 00:09:46.879 "malloc1", 00:09:46.879 "malloc2", 00:09:46.879 "malloc3", 00:09:46.879 "malloc4" 00:09:46.879 ], 00:09:46.879 "strip_size_kb": 64, 00:09:46.879 "superblock": false, 00:09:46.879 "method": "bdev_raid_create", 00:09:46.879 "req_id": 1 00:09:46.879 } 00:09:46.879 Got JSON-RPC error response 00:09:46.879 response: 00:09:46.879 { 00:09:46.879 "code": -17, 00:09:46.879 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:46.879 } 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.879 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:47.140 15:25:45 
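The failed `bdev_raid_create` above is intentional: the malloc base bdevs already carry a superblock from the existing raid bdev, so the RPC returns `-17` ("File exists") and the harness's `NOT` wrapper counts the non-zero exit as a pass. A minimal sketch of that expected-failure pattern, using a simplified hypothetical `NOT` helper (the real one in autotest_common.sh also handles `valid_exec_arg` dispatch and `es` bookkeeping, as the trace shows):

```shell
# Hypothetical simplified NOT helper: succeed only when the wrapped
# command fails, mirroring the inverted check seen in the log.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, which is what the test expected
}

# 'false' always fails, so NOT reports success for it.
NOT false && echo "expected failure observed"
```

In the log, the same inversion is visible as `[[ 1 == 0 ]]` failing (the RPC's bad exit status) followed by `es=1` being accepted.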
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.140 [2024-11-26 15:25:45.360108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:47.140 [2024-11-26 15:25:45.360205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.140 [2024-11-26 15:25:45.360237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:47.140 [2024-11-26 15:25:45.360266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.140 [2024-11-26 15:25:45.362407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.140 [2024-11-26 15:25:45.362492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:47.140 [2024-11-26 15:25:45.362577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:47.140 [2024-11-26 15:25:45.362669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.140 pt1 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.140 "name": "raid_bdev1", 00:09:47.140 "uuid": "284d3ea4-57ad-4783-b900-fbe750213799", 00:09:47.140 "strip_size_kb": 64, 00:09:47.140 "state": "configuring", 00:09:47.140 "raid_level": "raid0", 00:09:47.140 "superblock": true, 00:09:47.140 "num_base_bdevs": 4, 00:09:47.140 "num_base_bdevs_discovered": 1, 00:09:47.140 "num_base_bdevs_operational": 4, 00:09:47.140 "base_bdevs_list": [ 00:09:47.140 { 00:09:47.140 "name": "pt1", 00:09:47.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.140 "is_configured": true, 00:09:47.140 "data_offset": 2048, 00:09:47.140 "data_size": 63488 00:09:47.140 }, 00:09:47.140 { 00:09:47.140 "name": null, 00:09:47.140 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:47.140 "is_configured": false, 00:09:47.140 "data_offset": 2048, 00:09:47.140 "data_size": 63488 00:09:47.140 }, 00:09:47.140 { 00:09:47.140 "name": null, 00:09:47.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.140 "is_configured": false, 00:09:47.140 "data_offset": 2048, 00:09:47.140 "data_size": 63488 00:09:47.140 }, 00:09:47.140 { 00:09:47.140 "name": null, 00:09:47.140 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.140 "is_configured": false, 00:09:47.140 "data_offset": 2048, 00:09:47.140 "data_size": 63488 00:09:47.140 } 00:09:47.140 ] 00:09:47.140 }' 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.140 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.401 [2024-11-26 15:25:45.784269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.401 [2024-11-26 15:25:45.784330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.401 [2024-11-26 15:25:45.784348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:47.401 [2024-11-26 15:25:45.784358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.401 [2024-11-26 15:25:45.784756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.401 [2024-11-26 15:25:45.784775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:09:47.401 [2024-11-26 15:25:45.784850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:47.401 [2024-11-26 15:25:45.784873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.401 pt2 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.401 [2024-11-26 15:25:45.792260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.401 "name": "raid_bdev1", 00:09:47.401 "uuid": "284d3ea4-57ad-4783-b900-fbe750213799", 00:09:47.401 "strip_size_kb": 64, 00:09:47.401 "state": "configuring", 00:09:47.401 "raid_level": "raid0", 00:09:47.401 "superblock": true, 00:09:47.401 "num_base_bdevs": 4, 00:09:47.401 "num_base_bdevs_discovered": 1, 00:09:47.401 "num_base_bdevs_operational": 4, 00:09:47.401 "base_bdevs_list": [ 00:09:47.401 { 00:09:47.401 "name": "pt1", 00:09:47.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.401 "is_configured": true, 00:09:47.401 "data_offset": 2048, 00:09:47.401 "data_size": 63488 00:09:47.401 }, 00:09:47.401 { 00:09:47.401 "name": null, 00:09:47.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.401 "is_configured": false, 00:09:47.401 "data_offset": 0, 00:09:47.401 "data_size": 63488 00:09:47.401 }, 00:09:47.401 { 00:09:47.401 "name": null, 00:09:47.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.401 "is_configured": false, 00:09:47.401 "data_offset": 2048, 00:09:47.401 "data_size": 63488 00:09:47.401 }, 00:09:47.401 { 00:09:47.401 "name": null, 00:09:47.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.401 "is_configured": false, 00:09:47.401 "data_offset": 2048, 00:09:47.401 "data_size": 63488 00:09:47.401 } 00:09:47.401 ] 00:09:47.401 }' 
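`verify_raid_bdev_state` (bdev_raid.sh@103 onward) works by selecting the named raid bdev's JSON from `bdev_raid_get_bdevs all` and comparing individual fields against the expected values passed as arguments. A hand-rolled sketch of that check against a stand-in JSON (field names match the log; the comparison logic here is illustrative, not the harness's exact code):

```shell
# Stand-in for the jq-selected raid_bdev_info seen in the trace.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}'

expected_state=configuring
expected_level=raid0

# Pull each field with jq and compare, as the harness does field by field.
state=$(echo "$raid_bdev_info" | jq -r '.state')
level=$(echo "$raid_bdev_info" | jq -r '.raid_level')

[ "$state" = "$expected_state" ] && [ "$level" = "$expected_level" ] && echo "state ok"
```

This is why the trace re-runs `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'` after every mutation: each `verify_raid_bdev_state` call re-fetches the JSON rather than trusting a cached copy.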
00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.401 15:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.971 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:47.971 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.971 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.971 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.971 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 [2024-11-26 15:25:46.172363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.972 [2024-11-26 15:25:46.172502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.972 [2024-11-26 15:25:46.172541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:47.972 [2024-11-26 15:25:46.172569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.972 [2024-11-26 15:25:46.173010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.972 [2024-11-26 15:25:46.173067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.972 [2024-11-26 15:25:46.173185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:47.972 [2024-11-26 15:25:46.173237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.972 pt2 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:47.972 15:25:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 [2024-11-26 15:25:46.184356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.972 [2024-11-26 15:25:46.184445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.972 [2024-11-26 15:25:46.184479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:47.972 [2024-11-26 15:25:46.184505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.972 [2024-11-26 15:25:46.184890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.972 [2024-11-26 15:25:46.184943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.972 [2024-11-26 15:25:46.185036] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:47.972 [2024-11-26 15:25:46.185086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.972 pt3 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 [2024-11-26 15:25:46.196344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:47.972 [2024-11-26 15:25:46.196428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.972 [2024-11-26 15:25:46.196463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:47.972 [2024-11-26 15:25:46.196472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.972 [2024-11-26 15:25:46.196795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.972 [2024-11-26 15:25:46.196811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:47.972 [2024-11-26 15:25:46.196868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:47.972 [2024-11-26 15:25:46.196885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:47.972 [2024-11-26 15:25:46.196982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:47.972 [2024-11-26 15:25:46.196990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:47.972 [2024-11-26 15:25:46.197220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:47.972 [2024-11-26 15:25:46.197362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:47.972 [2024-11-26 15:25:46.197376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:47.972 [2024-11-26 15:25:46.197469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.972 pt4 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.972 "name": 
"raid_bdev1", 00:09:47.972 "uuid": "284d3ea4-57ad-4783-b900-fbe750213799", 00:09:47.972 "strip_size_kb": 64, 00:09:47.972 "state": "online", 00:09:47.972 "raid_level": "raid0", 00:09:47.972 "superblock": true, 00:09:47.972 "num_base_bdevs": 4, 00:09:47.972 "num_base_bdevs_discovered": 4, 00:09:47.972 "num_base_bdevs_operational": 4, 00:09:47.972 "base_bdevs_list": [ 00:09:47.972 { 00:09:47.972 "name": "pt1", 00:09:47.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.972 "is_configured": true, 00:09:47.972 "data_offset": 2048, 00:09:47.972 "data_size": 63488 00:09:47.972 }, 00:09:47.972 { 00:09:47.972 "name": "pt2", 00:09:47.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.972 "is_configured": true, 00:09:47.972 "data_offset": 2048, 00:09:47.972 "data_size": 63488 00:09:47.972 }, 00:09:47.972 { 00:09:47.972 "name": "pt3", 00:09:47.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.972 "is_configured": true, 00:09:47.972 "data_offset": 2048, 00:09:47.972 "data_size": 63488 00:09:47.972 }, 00:09:47.972 { 00:09:47.972 "name": "pt4", 00:09:47.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.972 "is_configured": true, 00:09:47.972 "data_offset": 2048, 00:09:47.972 "data_size": 63488 00:09:47.972 } 00:09:47.972 ] 00:09:47.972 }' 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.972 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.232 [2024-11-26 15:25:46.600778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.232 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.232 "name": "raid_bdev1", 00:09:48.232 "aliases": [ 00:09:48.232 "284d3ea4-57ad-4783-b900-fbe750213799" 00:09:48.232 ], 00:09:48.232 "product_name": "Raid Volume", 00:09:48.232 "block_size": 512, 00:09:48.232 "num_blocks": 253952, 00:09:48.232 "uuid": "284d3ea4-57ad-4783-b900-fbe750213799", 00:09:48.232 "assigned_rate_limits": { 00:09:48.232 "rw_ios_per_sec": 0, 00:09:48.232 "rw_mbytes_per_sec": 0, 00:09:48.232 "r_mbytes_per_sec": 0, 00:09:48.232 "w_mbytes_per_sec": 0 00:09:48.233 }, 00:09:48.233 "claimed": false, 00:09:48.233 "zoned": false, 00:09:48.233 "supported_io_types": { 00:09:48.233 "read": true, 00:09:48.233 "write": true, 00:09:48.233 "unmap": true, 00:09:48.233 "flush": true, 00:09:48.233 "reset": true, 00:09:48.233 "nvme_admin": false, 00:09:48.233 "nvme_io": false, 00:09:48.233 "nvme_io_md": false, 00:09:48.233 "write_zeroes": true, 00:09:48.233 "zcopy": false, 00:09:48.233 "get_zone_info": false, 00:09:48.233 "zone_management": false, 00:09:48.233 "zone_append": false, 00:09:48.233 "compare": false, 00:09:48.233 "compare_and_write": false, 00:09:48.233 "abort": 
false, 00:09:48.233 "seek_hole": false, 00:09:48.233 "seek_data": false, 00:09:48.233 "copy": false, 00:09:48.233 "nvme_iov_md": false 00:09:48.233 }, 00:09:48.233 "memory_domains": [ 00:09:48.233 { 00:09:48.233 "dma_device_id": "system", 00:09:48.233 "dma_device_type": 1 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.233 "dma_device_type": 2 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "dma_device_id": "system", 00:09:48.233 "dma_device_type": 1 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.233 "dma_device_type": 2 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "dma_device_id": "system", 00:09:48.233 "dma_device_type": 1 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.233 "dma_device_type": 2 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "dma_device_id": "system", 00:09:48.233 "dma_device_type": 1 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.233 "dma_device_type": 2 00:09:48.233 } 00:09:48.233 ], 00:09:48.233 "driver_specific": { 00:09:48.233 "raid": { 00:09:48.233 "uuid": "284d3ea4-57ad-4783-b900-fbe750213799", 00:09:48.233 "strip_size_kb": 64, 00:09:48.233 "state": "online", 00:09:48.233 "raid_level": "raid0", 00:09:48.233 "superblock": true, 00:09:48.233 "num_base_bdevs": 4, 00:09:48.233 "num_base_bdevs_discovered": 4, 00:09:48.233 "num_base_bdevs_operational": 4, 00:09:48.233 "base_bdevs_list": [ 00:09:48.233 { 00:09:48.233 "name": "pt1", 00:09:48.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.233 "is_configured": true, 00:09:48.233 "data_offset": 2048, 00:09:48.233 "data_size": 63488 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "name": "pt2", 00:09:48.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.233 "is_configured": true, 00:09:48.233 "data_offset": 2048, 00:09:48.233 "data_size": 63488 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "name": "pt3", 
00:09:48.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.233 "is_configured": true, 00:09:48.233 "data_offset": 2048, 00:09:48.233 "data_size": 63488 00:09:48.233 }, 00:09:48.233 { 00:09:48.233 "name": "pt4", 00:09:48.233 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.233 "is_configured": true, 00:09:48.233 "data_offset": 2048, 00:09:48.233 "data_size": 63488 00:09:48.233 } 00:09:48.233 ] 00:09:48.233 } 00:09:48.233 } 00:09:48.233 }' 00:09:48.233 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.233 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:48.233 pt2 00:09:48.233 pt3 00:09:48.233 pt4' 00:09:48.233 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.493 15:25:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:48.493 [2024-11-26 15:25:46.924891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 284d3ea4-57ad-4783-b900-fbe750213799 '!=' 284d3ea4-57ad-4783-b900-fbe750213799 ']' 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.493 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:48.753 15:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83226 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # 
'[' -z 83226 ']' 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83226 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83226 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.754 killing process with pid 83226 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83226' 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83226 00:09:48.754 [2024-11-26 15:25:46.994118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.754 [2024-11-26 15:25:46.994217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.754 [2024-11-26 15:25:46.994299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.754 [2024-11-26 15:25:46.994309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:48.754 15:25:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83226 00:09:48.754 [2024-11-26 15:25:47.038318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.014 15:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:49.014 00:09:49.014 real 0m3.909s 00:09:49.014 user 0m6.205s 00:09:49.014 sys 0m0.813s 00:09:49.014 ************************************ 00:09:49.014 END TEST raid_superblock_test 00:09:49.014 
************************************ 00:09:49.014 15:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.014 15:25:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.014 15:25:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:49.014 15:25:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.014 15:25:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.014 15:25:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.014 ************************************ 00:09:49.014 START TEST raid_read_error_test 00:09:49.014 ************************************ 00:09:49.014 15:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:09:49.014 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:49.014 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:49.014 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:49.014 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:49.014 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.014 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Q2xSzDYSBR 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=83474 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83474 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83474 ']' 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.015 15:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.015 [2024-11-26 15:25:47.418161] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:09:49.015 [2024-11-26 15:25:47.418291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83474 ] 00:09:49.275 [2024-11-26 15:25:47.552627] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:49.275 [2024-11-26 15:25:47.589036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.275 [2024-11-26 15:25:47.613902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.275 [2024-11-26 15:25:47.656520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.275 [2024-11-26 15:25:47.656639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.846 BaseBdev1_malloc 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.846 true 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.846 [2024-11-26 15:25:48.263938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.846 [2024-11-26 15:25:48.263991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.846 [2024-11-26 15:25:48.264023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.846 [2024-11-26 15:25:48.264047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.846 [2024-11-26 15:25:48.266088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.846 [2024-11-26 15:25:48.266190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.846 BaseBdev1 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.846 BaseBdev2_malloc 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.846 true 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.846 15:25:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.846 [2024-11-26 15:25:48.304519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.846 [2024-11-26 15:25:48.304563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.846 [2024-11-26 15:25:48.304578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.846 [2024-11-26 15:25:48.304588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.846 [2024-11-26 15:25:48.306596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.846 [2024-11-26 15:25:48.306665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.846 BaseBdev2 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.846 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.107 BaseBdev3_malloc 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.107 true 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.107 [2024-11-26 15:25:48.345146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:50.107 [2024-11-26 15:25:48.345246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.107 [2024-11-26 15:25:48.345279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:50.107 [2024-11-26 15:25:48.345330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.107 [2024-11-26 15:25:48.347334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.107 [2024-11-26 15:25:48.347371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:50.107 BaseBdev3 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.107 BaseBdev4_malloc 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.107 true 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.107 [2024-11-26 15:25:48.395696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:50.107 [2024-11-26 15:25:48.395746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.107 [2024-11-26 15:25:48.395762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:50.107 [2024-11-26 15:25:48.395772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.107 [2024-11-26 15:25:48.397762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.107 [2024-11-26 15:25:48.397802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:50.107 BaseBdev4 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.107 [2024-11-26 15:25:48.407738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.107 [2024-11-26 15:25:48.409519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.107 [2024-11-26 15:25:48.409590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.107 [2024-11-26 15:25:48.409642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:50.107 [2024-11-26 15:25:48.409826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:50.107 [2024-11-26 15:25:48.409840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:50.107 [2024-11-26 15:25:48.410082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:09:50.107 [2024-11-26 15:25:48.410223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:50.107 [2024-11-26 15:25:48.410233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:50.107 [2024-11-26 15:25:48.410352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.107 15:25:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.107 "name": "raid_bdev1", 00:09:50.107 "uuid": "a3111baa-a0ae-4b75-9713-a407309b6313", 00:09:50.107 "strip_size_kb": 64, 00:09:50.107 "state": "online", 00:09:50.107 "raid_level": "raid0", 00:09:50.107 "superblock": true, 00:09:50.107 "num_base_bdevs": 4, 00:09:50.107 "num_base_bdevs_discovered": 4, 00:09:50.107 "num_base_bdevs_operational": 4, 00:09:50.107 "base_bdevs_list": [ 00:09:50.107 { 00:09:50.107 "name": "BaseBdev1", 00:09:50.107 "uuid": "5a3ac4c2-1ecd-5fbb-8634-b553bf50cf1b", 00:09:50.107 "is_configured": true, 00:09:50.107 "data_offset": 2048, 00:09:50.107 "data_size": 63488 00:09:50.107 }, 00:09:50.107 { 00:09:50.107 "name": "BaseBdev2", 00:09:50.107 "uuid": "55cd3f96-0587-5cca-9391-5bfb0abd16a6", 
00:09:50.107 "is_configured": true, 00:09:50.107 "data_offset": 2048, 00:09:50.107 "data_size": 63488 00:09:50.107 }, 00:09:50.107 { 00:09:50.107 "name": "BaseBdev3", 00:09:50.107 "uuid": "4e71995c-090d-50a4-8a23-c8292bfcdb40", 00:09:50.107 "is_configured": true, 00:09:50.107 "data_offset": 2048, 00:09:50.107 "data_size": 63488 00:09:50.107 }, 00:09:50.107 { 00:09:50.107 "name": "BaseBdev4", 00:09:50.107 "uuid": "eb2cc926-87e6-50cc-b0d0-8dedb15789d5", 00:09:50.107 "is_configured": true, 00:09:50.107 "data_offset": 2048, 00:09:50.107 "data_size": 63488 00:09:50.107 } 00:09:50.107 ] 00:09:50.107 }' 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.107 15:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.678 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.678 15:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.678 [2024-11-26 15:25:48.948269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:51.617 15:25:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.617 "name": "raid_bdev1", 00:09:51.617 "uuid": "a3111baa-a0ae-4b75-9713-a407309b6313", 00:09:51.617 "strip_size_kb": 64, 00:09:51.617 "state": "online", 00:09:51.617 "raid_level": "raid0", 00:09:51.617 "superblock": true, 00:09:51.617 "num_base_bdevs": 4, 
00:09:51.617 "num_base_bdevs_discovered": 4, 00:09:51.617 "num_base_bdevs_operational": 4, 00:09:51.617 "base_bdevs_list": [ 00:09:51.617 { 00:09:51.617 "name": "BaseBdev1", 00:09:51.617 "uuid": "5a3ac4c2-1ecd-5fbb-8634-b553bf50cf1b", 00:09:51.617 "is_configured": true, 00:09:51.617 "data_offset": 2048, 00:09:51.617 "data_size": 63488 00:09:51.617 }, 00:09:51.617 { 00:09:51.617 "name": "BaseBdev2", 00:09:51.617 "uuid": "55cd3f96-0587-5cca-9391-5bfb0abd16a6", 00:09:51.617 "is_configured": true, 00:09:51.617 "data_offset": 2048, 00:09:51.617 "data_size": 63488 00:09:51.617 }, 00:09:51.617 { 00:09:51.617 "name": "BaseBdev3", 00:09:51.617 "uuid": "4e71995c-090d-50a4-8a23-c8292bfcdb40", 00:09:51.617 "is_configured": true, 00:09:51.617 "data_offset": 2048, 00:09:51.617 "data_size": 63488 00:09:51.617 }, 00:09:51.617 { 00:09:51.617 "name": "BaseBdev4", 00:09:51.617 "uuid": "eb2cc926-87e6-50cc-b0d0-8dedb15789d5", 00:09:51.617 "is_configured": true, 00:09:51.617 "data_offset": 2048, 00:09:51.617 "data_size": 63488 00:09:51.617 } 00:09:51.617 ] 00:09:51.617 }' 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.617 15:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.877 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.877 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.877 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.877 [2024-11-26 15:25:50.274511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.877 [2024-11-26 15:25:50.274612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.877 [2024-11-26 15:25:50.277239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.877 [2024-11-26 15:25:50.277335] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.877 [2024-11-26 15:25:50.277408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.877 [2024-11-26 15:25:50.277453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:51.877 { 00:09:51.877 "results": [ 00:09:51.877 { 00:09:51.877 "job": "raid_bdev1", 00:09:51.877 "core_mask": "0x1", 00:09:51.877 "workload": "randrw", 00:09:51.877 "percentage": 50, 00:09:51.877 "status": "finished", 00:09:51.877 "queue_depth": 1, 00:09:51.877 "io_size": 131072, 00:09:51.877 "runtime": 1.324435, 00:09:51.877 "iops": 17185.441339137065, 00:09:51.877 "mibps": 2148.180167392133, 00:09:51.877 "io_failed": 1, 00:09:51.877 "io_timeout": 0, 00:09:51.877 "avg_latency_us": 80.81119539276871, 00:09:51.877 "min_latency_us": 24.656149219907608, 00:09:51.877 "max_latency_us": 1378.0667654493159 00:09:51.877 } 00:09:51.877 ], 00:09:51.877 "core_count": 1 00:09:51.877 } 00:09:51.877 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83474 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83474 ']' 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83474 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83474 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83474' 00:09:51.878 killing process with pid 83474 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83474 00:09:51.878 [2024-11-26 15:25:50.318326] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.878 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83474 00:09:52.138 [2024-11-26 15:25:50.354640] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Q2xSzDYSBR 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:52.138 ************************************ 00:09:52.138 END TEST raid_read_error_test 00:09:52.138 ************************************ 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:52.138 00:09:52.138 real 0m3.256s 00:09:52.138 user 0m4.081s 00:09:52.138 sys 0m0.543s 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.138 15:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.398 15:25:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 4 write 00:09:52.398 15:25:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:52.398 15:25:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.398 15:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.398 ************************************ 00:09:52.398 START TEST raid_write_error_test 00:09:52.398 ************************************ 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:52.398 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mBGsf4HysG 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83603 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83603 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 83603 ']' 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 
-- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.399 15:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.399 [2024-11-26 15:25:50.746519] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:09:52.399 [2024-11-26 15:25:50.746747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83603 ] 00:09:52.659 [2024-11-26 15:25:50.881218] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:52.659 [2024-11-26 15:25:50.918718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.659 [2024-11-26 15:25:50.943872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.659 [2024-11-26 15:25:50.987069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.659 [2024-11-26 15:25:50.987100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.229 BaseBdev1_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.229 true 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.229 [2024-11-26 15:25:51.594765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.229 [2024-11-26 15:25:51.594828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.229 [2024-11-26 15:25:51.594846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:53.229 [2024-11-26 15:25:51.594859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.229 [2024-11-26 15:25:51.596944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.229 [2024-11-26 15:25:51.596982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.229 BaseBdev1 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.229 BaseBdev2_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.229 true 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.229 [2024-11-26 15:25:51.635327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.229 [2024-11-26 15:25:51.635375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.229 [2024-11-26 15:25:51.635405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:53.229 [2024-11-26 15:25:51.635415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.229 [2024-11-26 15:25:51.637425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.229 [2024-11-26 15:25:51.637523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:53.229 BaseBdev2 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.229 BaseBdev3_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.229 true 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.229 [2024-11-26 15:25:51.675843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:53.229 [2024-11-26 15:25:51.675959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.229 [2024-11-26 15:25:51.675978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:53.229 [2024-11-26 15:25:51.675989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.229 [2024-11-26 15:25:51.677945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.229 [2024-11-26 15:25:51.677984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:53.229 BaseBdev3 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.229 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.490 BaseBdev4_malloc 00:09:53.490 
15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.490 true 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.490 [2024-11-26 15:25:51.726548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:53.490 [2024-11-26 15:25:51.726602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.490 [2024-11-26 15:25:51.726618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:53.490 [2024-11-26 15:25:51.726628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.490 [2024-11-26 15:25:51.728554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.490 [2024-11-26 15:25:51.728591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:53.490 BaseBdev4 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:53.490 15:25:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.490 [2024-11-26 15:25:51.738584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.490 [2024-11-26 15:25:51.740366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.490 [2024-11-26 15:25:51.740432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.490 [2024-11-26 15:25:51.740489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:53.490 [2024-11-26 15:25:51.740693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:53.490 [2024-11-26 15:25:51.740706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:53.490 [2024-11-26 15:25:51.740944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:09:53.490 [2024-11-26 15:25:51.741068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:53.490 [2024-11-26 15:25:51.741078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:53.490 [2024-11-26 15:25:51.741219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.490 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.491 "name": "raid_bdev1", 00:09:53.491 "uuid": "e3ac1c41-3489-456f-a06a-f8a6ea9545cf", 00:09:53.491 "strip_size_kb": 64, 00:09:53.491 "state": "online", 00:09:53.491 "raid_level": "raid0", 00:09:53.491 "superblock": true, 00:09:53.491 "num_base_bdevs": 4, 00:09:53.491 "num_base_bdevs_discovered": 4, 00:09:53.491 "num_base_bdevs_operational": 4, 00:09:53.491 "base_bdevs_list": [ 00:09:53.491 { 00:09:53.491 "name": "BaseBdev1", 00:09:53.491 "uuid": "3fd84546-bb4f-52a2-9565-ff7cc385f22a", 00:09:53.491 "is_configured": true, 00:09:53.491 "data_offset": 2048, 00:09:53.491 "data_size": 63488 00:09:53.491 }, 00:09:53.491 { 00:09:53.491 
"name": "BaseBdev2", 00:09:53.491 "uuid": "d6c0bfa9-dec4-56c7-abe1-14a4ae1aa0da", 00:09:53.491 "is_configured": true, 00:09:53.491 "data_offset": 2048, 00:09:53.491 "data_size": 63488 00:09:53.491 }, 00:09:53.491 { 00:09:53.491 "name": "BaseBdev3", 00:09:53.491 "uuid": "f1224c58-3931-50f3-9add-f571208da97a", 00:09:53.491 "is_configured": true, 00:09:53.491 "data_offset": 2048, 00:09:53.491 "data_size": 63488 00:09:53.491 }, 00:09:53.491 { 00:09:53.491 "name": "BaseBdev4", 00:09:53.491 "uuid": "02e12081-3186-5100-9116-435959fbcb06", 00:09:53.491 "is_configured": true, 00:09:53.491 "data_offset": 2048, 00:09:53.491 "data_size": 63488 00:09:53.491 } 00:09:53.491 ] 00:09:53.491 }' 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.491 15:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.751 15:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:53.751 15:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.751 [2024-11-26 15:25:52.223092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.690 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.949 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.949 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.950 "name": "raid_bdev1", 00:09:54.950 "uuid": "e3ac1c41-3489-456f-a06a-f8a6ea9545cf", 00:09:54.950 "strip_size_kb": 64, 00:09:54.950 "state": "online", 
00:09:54.950 "raid_level": "raid0", 00:09:54.950 "superblock": true, 00:09:54.950 "num_base_bdevs": 4, 00:09:54.950 "num_base_bdevs_discovered": 4, 00:09:54.950 "num_base_bdevs_operational": 4, 00:09:54.950 "base_bdevs_list": [ 00:09:54.950 { 00:09:54.950 "name": "BaseBdev1", 00:09:54.950 "uuid": "3fd84546-bb4f-52a2-9565-ff7cc385f22a", 00:09:54.950 "is_configured": true, 00:09:54.950 "data_offset": 2048, 00:09:54.950 "data_size": 63488 00:09:54.950 }, 00:09:54.950 { 00:09:54.950 "name": "BaseBdev2", 00:09:54.950 "uuid": "d6c0bfa9-dec4-56c7-abe1-14a4ae1aa0da", 00:09:54.950 "is_configured": true, 00:09:54.950 "data_offset": 2048, 00:09:54.950 "data_size": 63488 00:09:54.950 }, 00:09:54.950 { 00:09:54.950 "name": "BaseBdev3", 00:09:54.950 "uuid": "f1224c58-3931-50f3-9add-f571208da97a", 00:09:54.950 "is_configured": true, 00:09:54.950 "data_offset": 2048, 00:09:54.950 "data_size": 63488 00:09:54.950 }, 00:09:54.950 { 00:09:54.950 "name": "BaseBdev4", 00:09:54.950 "uuid": "02e12081-3186-5100-9116-435959fbcb06", 00:09:54.950 "is_configured": true, 00:09:54.950 "data_offset": 2048, 00:09:54.950 "data_size": 63488 00:09:54.950 } 00:09:54.950 ] 00:09:54.950 }' 00:09:54.950 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.950 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.209 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.209 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.210 [2024-11-26 15:25:53.573593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.210 [2024-11-26 15:25:53.573626] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.210 [2024-11-26 15:25:53.576029] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.210 [2024-11-26 15:25:53.576083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.210 [2024-11-26 15:25:53.576127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.210 [2024-11-26 15:25:53.576138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:55.210 { 00:09:55.210 "results": [ 00:09:55.210 { 00:09:55.210 "job": "raid_bdev1", 00:09:55.210 "core_mask": "0x1", 00:09:55.210 "workload": "randrw", 00:09:55.210 "percentage": 50, 00:09:55.210 "status": "finished", 00:09:55.210 "queue_depth": 1, 00:09:55.210 "io_size": 131072, 00:09:55.210 "runtime": 1.348437, 00:09:55.210 "iops": 17179.14889609229, 00:09:55.210 "mibps": 2147.3936120115363, 00:09:55.210 "io_failed": 1, 00:09:55.210 "io_timeout": 0, 00:09:55.210 "avg_latency_us": 80.83586642870759, 00:09:55.210 "min_latency_us": 24.54458293384468, 00:09:55.210 "max_latency_us": 1356.646038525233 00:09:55.210 } 00:09:55.210 ], 00:09:55.210 "core_count": 1 00:09:55.210 } 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83603 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 83603 ']' 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 83603 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83603 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83603' 00:09:55.210 killing process with pid 83603 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 83603 00:09:55.210 [2024-11-26 15:25:53.620146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.210 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 83603 00:09:55.210 [2024-11-26 15:25:53.655475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mBGsf4HysG 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:55.471 00:09:55.471 real 0m3.224s 00:09:55.471 user 0m4.026s 00:09:55.471 sys 0m0.526s 00:09:55.471 ************************************ 00:09:55.471 END TEST raid_write_error_test 00:09:55.471 ************************************ 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.471 15:25:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.471 15:25:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:55.471 15:25:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:55.471 15:25:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.471 15:25:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.471 15:25:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.471 ************************************ 00:09:55.471 START TEST raid_state_function_test 00:09:55.471 ************************************ 00:09:55.471 15:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:09:55.471 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:55.471 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:55.471 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:55.471 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.471 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.471 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.471 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.731 15:25:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:55.731 15:25:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83730 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83730' 00:09:55.731 Process raid pid: 83730 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83730 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83730 ']' 00:09:55.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.731 15:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.731 [2024-11-26 15:25:54.032157] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:09:55.731 [2024-11-26 15:25:54.032290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.731 [2024-11-26 15:25:54.167310] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:55.731 [2024-11-26 15:25:54.202814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.990 [2024-11-26 15:25:54.227587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.990 [2024-11-26 15:25:54.269745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.990 [2024-11-26 15:25:54.269778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.558 [2024-11-26 15:25:54.856794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.558 [2024-11-26 15:25:54.856846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.558 [2024-11-26 15:25:54.856858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.558 [2024-11-26 15:25:54.856865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.558 [2024-11-26 15:25:54.856875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.558 [2024-11-26 15:25:54.856882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.558 [2024-11-26 15:25:54.856890] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:56.558 
[2024-11-26 15:25:54.856897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.558 15:25:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.558 "name": "Existed_Raid", 00:09:56.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.558 "strip_size_kb": 64, 00:09:56.558 "state": "configuring", 00:09:56.558 "raid_level": "concat", 00:09:56.558 "superblock": false, 00:09:56.558 "num_base_bdevs": 4, 00:09:56.558 "num_base_bdevs_discovered": 0, 00:09:56.558 "num_base_bdevs_operational": 4, 00:09:56.558 "base_bdevs_list": [ 00:09:56.558 { 00:09:56.558 "name": "BaseBdev1", 00:09:56.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.558 "is_configured": false, 00:09:56.558 "data_offset": 0, 00:09:56.558 "data_size": 0 00:09:56.558 }, 00:09:56.558 { 00:09:56.558 "name": "BaseBdev2", 00:09:56.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.558 "is_configured": false, 00:09:56.558 "data_offset": 0, 00:09:56.558 "data_size": 0 00:09:56.558 }, 00:09:56.558 { 00:09:56.558 "name": "BaseBdev3", 00:09:56.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.558 "is_configured": false, 00:09:56.558 "data_offset": 0, 00:09:56.558 "data_size": 0 00:09:56.558 }, 00:09:56.558 { 00:09:56.558 "name": "BaseBdev4", 00:09:56.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.558 "is_configured": false, 00:09:56.558 "data_offset": 0, 00:09:56.558 "data_size": 0 00:09:56.558 } 00:09:56.558 ] 00:09:56.558 }' 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.558 15:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.817 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.818 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.818 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.818 [2024-11-26 15:25:55.284775] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.818 [2024-11-26 15:25:55.284860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:56.818 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.818 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.818 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.818 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.077 [2024-11-26 15:25:55.296817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.077 [2024-11-26 15:25:55.296897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.077 [2024-11-26 15:25:55.296928] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.077 [2024-11-26 15:25:55.296958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.078 [2024-11-26 15:25:55.296977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.078 [2024-11-26 15:25:55.297039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.078 [2024-11-26 15:25:55.297066] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:57.078 [2024-11-26 15:25:55.297106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.078 [2024-11-26 15:25:55.317561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.078 BaseBdev1 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.078 [ 00:09:57.078 { 
00:09:57.078 "name": "BaseBdev1", 00:09:57.078 "aliases": [ 00:09:57.078 "00aa8fd5-145e-4e27-b039-c6a33dcc26c7" 00:09:57.078 ], 00:09:57.078 "product_name": "Malloc disk", 00:09:57.078 "block_size": 512, 00:09:57.078 "num_blocks": 65536, 00:09:57.078 "uuid": "00aa8fd5-145e-4e27-b039-c6a33dcc26c7", 00:09:57.078 "assigned_rate_limits": { 00:09:57.078 "rw_ios_per_sec": 0, 00:09:57.078 "rw_mbytes_per_sec": 0, 00:09:57.078 "r_mbytes_per_sec": 0, 00:09:57.078 "w_mbytes_per_sec": 0 00:09:57.078 }, 00:09:57.078 "claimed": true, 00:09:57.078 "claim_type": "exclusive_write", 00:09:57.078 "zoned": false, 00:09:57.078 "supported_io_types": { 00:09:57.078 "read": true, 00:09:57.078 "write": true, 00:09:57.078 "unmap": true, 00:09:57.078 "flush": true, 00:09:57.078 "reset": true, 00:09:57.078 "nvme_admin": false, 00:09:57.078 "nvme_io": false, 00:09:57.078 "nvme_io_md": false, 00:09:57.078 "write_zeroes": true, 00:09:57.078 "zcopy": true, 00:09:57.078 "get_zone_info": false, 00:09:57.078 "zone_management": false, 00:09:57.078 "zone_append": false, 00:09:57.078 "compare": false, 00:09:57.078 "compare_and_write": false, 00:09:57.078 "abort": true, 00:09:57.078 "seek_hole": false, 00:09:57.078 "seek_data": false, 00:09:57.078 "copy": true, 00:09:57.078 "nvme_iov_md": false 00:09:57.078 }, 00:09:57.078 "memory_domains": [ 00:09:57.078 { 00:09:57.078 "dma_device_id": "system", 00:09:57.078 "dma_device_type": 1 00:09:57.078 }, 00:09:57.078 { 00:09:57.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.078 "dma_device_type": 2 00:09:57.078 } 00:09:57.078 ], 00:09:57.078 "driver_specific": {} 00:09:57.078 } 00:09:57.078 ] 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.078 "name": "Existed_Raid", 00:09:57.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.078 "strip_size_kb": 64, 00:09:57.078 "state": "configuring", 00:09:57.078 "raid_level": "concat", 00:09:57.078 "superblock": false, 00:09:57.078 "num_base_bdevs": 4, 00:09:57.078 
"num_base_bdevs_discovered": 1, 00:09:57.078 "num_base_bdevs_operational": 4, 00:09:57.078 "base_bdevs_list": [ 00:09:57.078 { 00:09:57.078 "name": "BaseBdev1", 00:09:57.078 "uuid": "00aa8fd5-145e-4e27-b039-c6a33dcc26c7", 00:09:57.078 "is_configured": true, 00:09:57.078 "data_offset": 0, 00:09:57.078 "data_size": 65536 00:09:57.078 }, 00:09:57.078 { 00:09:57.078 "name": "BaseBdev2", 00:09:57.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.078 "is_configured": false, 00:09:57.078 "data_offset": 0, 00:09:57.078 "data_size": 0 00:09:57.078 }, 00:09:57.078 { 00:09:57.078 "name": "BaseBdev3", 00:09:57.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.078 "is_configured": false, 00:09:57.078 "data_offset": 0, 00:09:57.078 "data_size": 0 00:09:57.078 }, 00:09:57.078 { 00:09:57.078 "name": "BaseBdev4", 00:09:57.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.078 "is_configured": false, 00:09:57.078 "data_offset": 0, 00:09:57.078 "data_size": 0 00:09:57.078 } 00:09:57.078 ] 00:09:57.078 }' 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.078 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.338 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.338 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.338 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.338 [2024-11-26 15:25:55.797733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.338 [2024-11-26 15:25:55.797793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:57.338 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.338 15:25:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.338 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.338 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.338 [2024-11-26 15:25:55.809774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.338 [2024-11-26 15:25:55.811596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.338 [2024-11-26 15:25:55.811679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.338 [2024-11-26 15:25:55.811695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.338 [2024-11-26 15:25:55.811702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.338 [2024-11-26 15:25:55.811710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:57.338 [2024-11-26 15:25:55.811717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.613 "name": "Existed_Raid", 00:09:57.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.613 "strip_size_kb": 64, 00:09:57.613 "state": "configuring", 00:09:57.613 "raid_level": "concat", 00:09:57.613 "superblock": false, 00:09:57.613 "num_base_bdevs": 4, 00:09:57.613 "num_base_bdevs_discovered": 1, 00:09:57.613 "num_base_bdevs_operational": 4, 00:09:57.613 "base_bdevs_list": [ 00:09:57.613 { 00:09:57.613 "name": "BaseBdev1", 00:09:57.613 "uuid": "00aa8fd5-145e-4e27-b039-c6a33dcc26c7", 00:09:57.613 
"is_configured": true, 00:09:57.613 "data_offset": 0, 00:09:57.613 "data_size": 65536 00:09:57.613 }, 00:09:57.613 { 00:09:57.613 "name": "BaseBdev2", 00:09:57.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.613 "is_configured": false, 00:09:57.613 "data_offset": 0, 00:09:57.613 "data_size": 0 00:09:57.613 }, 00:09:57.613 { 00:09:57.613 "name": "BaseBdev3", 00:09:57.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.613 "is_configured": false, 00:09:57.613 "data_offset": 0, 00:09:57.613 "data_size": 0 00:09:57.613 }, 00:09:57.613 { 00:09:57.613 "name": "BaseBdev4", 00:09:57.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.613 "is_configured": false, 00:09:57.613 "data_offset": 0, 00:09:57.613 "data_size": 0 00:09:57.613 } 00:09:57.613 ] 00:09:57.613 }' 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.613 15:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.873 [2024-11-26 15:25:56.224922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.873 BaseBdev2 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.873 15:25:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.873 [ 00:09:57.873 { 00:09:57.873 "name": "BaseBdev2", 00:09:57.873 "aliases": [ 00:09:57.873 "b56a9614-e6d0-4b7c-9307-f9723afb2255" 00:09:57.873 ], 00:09:57.873 "product_name": "Malloc disk", 00:09:57.873 "block_size": 512, 00:09:57.873 "num_blocks": 65536, 00:09:57.873 "uuid": "b56a9614-e6d0-4b7c-9307-f9723afb2255", 00:09:57.873 "assigned_rate_limits": { 00:09:57.873 "rw_ios_per_sec": 0, 00:09:57.873 "rw_mbytes_per_sec": 0, 00:09:57.873 "r_mbytes_per_sec": 0, 00:09:57.873 "w_mbytes_per_sec": 0 00:09:57.873 }, 00:09:57.873 "claimed": true, 00:09:57.873 "claim_type": "exclusive_write", 00:09:57.873 "zoned": false, 00:09:57.873 "supported_io_types": { 00:09:57.873 "read": true, 00:09:57.873 "write": true, 00:09:57.873 "unmap": true, 00:09:57.873 "flush": true, 00:09:57.873 "reset": true, 00:09:57.873 "nvme_admin": false, 00:09:57.873 "nvme_io": false, 00:09:57.873 "nvme_io_md": 
false, 00:09:57.873 "write_zeroes": true, 00:09:57.873 "zcopy": true, 00:09:57.873 "get_zone_info": false, 00:09:57.873 "zone_management": false, 00:09:57.873 "zone_append": false, 00:09:57.873 "compare": false, 00:09:57.873 "compare_and_write": false, 00:09:57.873 "abort": true, 00:09:57.873 "seek_hole": false, 00:09:57.873 "seek_data": false, 00:09:57.873 "copy": true, 00:09:57.873 "nvme_iov_md": false 00:09:57.873 }, 00:09:57.873 "memory_domains": [ 00:09:57.873 { 00:09:57.873 "dma_device_id": "system", 00:09:57.873 "dma_device_type": 1 00:09:57.873 }, 00:09:57.873 { 00:09:57.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.873 "dma_device_type": 2 00:09:57.873 } 00:09:57.873 ], 00:09:57.873 "driver_specific": {} 00:09:57.873 } 00:09:57.873 ] 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.873 "name": "Existed_Raid", 00:09:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.873 "strip_size_kb": 64, 00:09:57.873 "state": "configuring", 00:09:57.873 "raid_level": "concat", 00:09:57.873 "superblock": false, 00:09:57.873 "num_base_bdevs": 4, 00:09:57.873 "num_base_bdevs_discovered": 2, 00:09:57.873 "num_base_bdevs_operational": 4, 00:09:57.873 "base_bdevs_list": [ 00:09:57.873 { 00:09:57.873 "name": "BaseBdev1", 00:09:57.873 "uuid": "00aa8fd5-145e-4e27-b039-c6a33dcc26c7", 00:09:57.873 "is_configured": true, 00:09:57.873 "data_offset": 0, 00:09:57.873 "data_size": 65536 00:09:57.873 }, 00:09:57.873 { 00:09:57.873 "name": "BaseBdev2", 00:09:57.873 "uuid": "b56a9614-e6d0-4b7c-9307-f9723afb2255", 00:09:57.873 "is_configured": true, 00:09:57.873 "data_offset": 0, 00:09:57.873 "data_size": 65536 00:09:57.873 }, 00:09:57.873 { 00:09:57.873 "name": "BaseBdev3", 00:09:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.873 
"is_configured": false, 00:09:57.873 "data_offset": 0, 00:09:57.873 "data_size": 0 00:09:57.873 }, 00:09:57.873 { 00:09:57.873 "name": "BaseBdev4", 00:09:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.873 "is_configured": false, 00:09:57.873 "data_offset": 0, 00:09:57.873 "data_size": 0 00:09:57.873 } 00:09:57.873 ] 00:09:57.873 }' 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.873 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.442 [2024-11-26 15:25:56.679031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.442 BaseBdev3 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.442 15:25:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.442 [ 00:09:58.442 { 00:09:58.442 "name": "BaseBdev3", 00:09:58.442 "aliases": [ 00:09:58.442 "4f56004b-cfb0-42c2-acd9-80bbda4aa6aa" 00:09:58.442 ], 00:09:58.442 "product_name": "Malloc disk", 00:09:58.442 "block_size": 512, 00:09:58.442 "num_blocks": 65536, 00:09:58.442 "uuid": "4f56004b-cfb0-42c2-acd9-80bbda4aa6aa", 00:09:58.442 "assigned_rate_limits": { 00:09:58.442 "rw_ios_per_sec": 0, 00:09:58.442 "rw_mbytes_per_sec": 0, 00:09:58.442 "r_mbytes_per_sec": 0, 00:09:58.442 "w_mbytes_per_sec": 0 00:09:58.442 }, 00:09:58.442 "claimed": true, 00:09:58.442 "claim_type": "exclusive_write", 00:09:58.442 "zoned": false, 00:09:58.442 "supported_io_types": { 00:09:58.442 "read": true, 00:09:58.442 "write": true, 00:09:58.442 "unmap": true, 00:09:58.442 "flush": true, 00:09:58.442 "reset": true, 00:09:58.442 "nvme_admin": false, 00:09:58.442 "nvme_io": false, 00:09:58.442 "nvme_io_md": false, 00:09:58.442 "write_zeroes": true, 00:09:58.442 "zcopy": true, 00:09:58.442 "get_zone_info": false, 00:09:58.442 "zone_management": false, 00:09:58.442 "zone_append": false, 00:09:58.442 "compare": false, 00:09:58.442 "compare_and_write": false, 00:09:58.442 "abort": true, 00:09:58.442 "seek_hole": false, 00:09:58.442 "seek_data": false, 00:09:58.442 "copy": true, 00:09:58.442 "nvme_iov_md": false 00:09:58.442 }, 00:09:58.442 
"memory_domains": [ 00:09:58.442 { 00:09:58.442 "dma_device_id": "system", 00:09:58.442 "dma_device_type": 1 00:09:58.442 }, 00:09:58.442 { 00:09:58.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.442 "dma_device_type": 2 00:09:58.442 } 00:09:58.442 ], 00:09:58.442 "driver_specific": {} 00:09:58.442 } 00:09:58.442 ] 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.442 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.443 "name": "Existed_Raid", 00:09:58.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.443 "strip_size_kb": 64, 00:09:58.443 "state": "configuring", 00:09:58.443 "raid_level": "concat", 00:09:58.443 "superblock": false, 00:09:58.443 "num_base_bdevs": 4, 00:09:58.443 "num_base_bdevs_discovered": 3, 00:09:58.443 "num_base_bdevs_operational": 4, 00:09:58.443 "base_bdevs_list": [ 00:09:58.443 { 00:09:58.443 "name": "BaseBdev1", 00:09:58.443 "uuid": "00aa8fd5-145e-4e27-b039-c6a33dcc26c7", 00:09:58.443 "is_configured": true, 00:09:58.443 "data_offset": 0, 00:09:58.443 "data_size": 65536 00:09:58.443 }, 00:09:58.443 { 00:09:58.443 "name": "BaseBdev2", 00:09:58.443 "uuid": "b56a9614-e6d0-4b7c-9307-f9723afb2255", 00:09:58.443 "is_configured": true, 00:09:58.443 "data_offset": 0, 00:09:58.443 "data_size": 65536 00:09:58.443 }, 00:09:58.443 { 00:09:58.443 "name": "BaseBdev3", 00:09:58.443 "uuid": "4f56004b-cfb0-42c2-acd9-80bbda4aa6aa", 00:09:58.443 "is_configured": true, 00:09:58.443 "data_offset": 0, 00:09:58.443 "data_size": 65536 00:09:58.443 }, 00:09:58.443 { 00:09:58.443 "name": "BaseBdev4", 00:09:58.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.443 "is_configured": false, 00:09:58.443 "data_offset": 0, 00:09:58.443 "data_size": 0 00:09:58.443 } 00:09:58.443 ] 00:09:58.443 }' 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:09:58.443 15:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.701 [2024-11-26 15:25:57.150248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:58.701 [2024-11-26 15:25:57.150381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:58.701 [2024-11-26 15:25:57.150401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:58.701 [2024-11-26 15:25:57.150705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:58.701 [2024-11-26 15:25:57.150834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:58.701 [2024-11-26 15:25:57.150844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:58.701 [2024-11-26 15:25:57.151044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.701 BaseBdev4 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.701 15:25:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.701 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.960 [ 00:09:58.960 { 00:09:58.960 "name": "BaseBdev4", 00:09:58.960 "aliases": [ 00:09:58.960 "c559f029-4f5d-429c-9a0d-c81fb9043bee" 00:09:58.960 ], 00:09:58.960 "product_name": "Malloc disk", 00:09:58.960 "block_size": 512, 00:09:58.960 "num_blocks": 65536, 00:09:58.960 "uuid": "c559f029-4f5d-429c-9a0d-c81fb9043bee", 00:09:58.960 "assigned_rate_limits": { 00:09:58.960 "rw_ios_per_sec": 0, 00:09:58.960 "rw_mbytes_per_sec": 0, 00:09:58.960 "r_mbytes_per_sec": 0, 00:09:58.960 "w_mbytes_per_sec": 0 00:09:58.960 }, 00:09:58.960 "claimed": true, 00:09:58.960 "claim_type": "exclusive_write", 00:09:58.960 "zoned": false, 00:09:58.960 "supported_io_types": { 00:09:58.960 "read": true, 00:09:58.960 "write": true, 00:09:58.960 "unmap": true, 00:09:58.960 "flush": true, 00:09:58.960 "reset": true, 00:09:58.960 "nvme_admin": false, 00:09:58.960 "nvme_io": false, 00:09:58.960 "nvme_io_md": false, 00:09:58.960 "write_zeroes": true, 00:09:58.960 "zcopy": true, 00:09:58.960 "get_zone_info": false, 
00:09:58.960 "zone_management": false, 00:09:58.960 "zone_append": false, 00:09:58.960 "compare": false, 00:09:58.960 "compare_and_write": false, 00:09:58.960 "abort": true, 00:09:58.960 "seek_hole": false, 00:09:58.960 "seek_data": false, 00:09:58.960 "copy": true, 00:09:58.960 "nvme_iov_md": false 00:09:58.960 }, 00:09:58.960 "memory_domains": [ 00:09:58.960 { 00:09:58.960 "dma_device_id": "system", 00:09:58.960 "dma_device_type": 1 00:09:58.960 }, 00:09:58.960 { 00:09:58.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.960 "dma_device_type": 2 00:09:58.960 } 00:09:58.960 ], 00:09:58.960 "driver_specific": {} 00:09:58.960 } 00:09:58.960 ] 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.960 "name": "Existed_Raid", 00:09:58.960 "uuid": "b76fe81b-b19d-4953-bcf1-b7a29b9a1924", 00:09:58.960 "strip_size_kb": 64, 00:09:58.960 "state": "online", 00:09:58.960 "raid_level": "concat", 00:09:58.960 "superblock": false, 00:09:58.960 "num_base_bdevs": 4, 00:09:58.960 "num_base_bdevs_discovered": 4, 00:09:58.960 "num_base_bdevs_operational": 4, 00:09:58.960 "base_bdevs_list": [ 00:09:58.960 { 00:09:58.960 "name": "BaseBdev1", 00:09:58.960 "uuid": "00aa8fd5-145e-4e27-b039-c6a33dcc26c7", 00:09:58.960 "is_configured": true, 00:09:58.960 "data_offset": 0, 00:09:58.960 "data_size": 65536 00:09:58.960 }, 00:09:58.960 { 00:09:58.960 "name": "BaseBdev2", 00:09:58.960 "uuid": "b56a9614-e6d0-4b7c-9307-f9723afb2255", 00:09:58.960 "is_configured": true, 00:09:58.960 "data_offset": 0, 00:09:58.960 "data_size": 65536 00:09:58.960 }, 00:09:58.960 { 00:09:58.960 "name": "BaseBdev3", 00:09:58.960 "uuid": "4f56004b-cfb0-42c2-acd9-80bbda4aa6aa", 00:09:58.960 "is_configured": true, 00:09:58.960 "data_offset": 0, 00:09:58.960 "data_size": 65536 00:09:58.960 }, 00:09:58.960 { 
00:09:58.960 "name": "BaseBdev4", 00:09:58.960 "uuid": "c559f029-4f5d-429c-9a0d-c81fb9043bee", 00:09:58.960 "is_configured": true, 00:09:58.960 "data_offset": 0, 00:09:58.960 "data_size": 65536 00:09:58.960 } 00:09:58.960 ] 00:09:58.960 }' 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.960 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.218 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.218 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.218 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.219 [2024-11-26 15:25:57.614747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.219 "name": "Existed_Raid", 00:09:59.219 "aliases": [ 00:09:59.219 
"b76fe81b-b19d-4953-bcf1-b7a29b9a1924" 00:09:59.219 ], 00:09:59.219 "product_name": "Raid Volume", 00:09:59.219 "block_size": 512, 00:09:59.219 "num_blocks": 262144, 00:09:59.219 "uuid": "b76fe81b-b19d-4953-bcf1-b7a29b9a1924", 00:09:59.219 "assigned_rate_limits": { 00:09:59.219 "rw_ios_per_sec": 0, 00:09:59.219 "rw_mbytes_per_sec": 0, 00:09:59.219 "r_mbytes_per_sec": 0, 00:09:59.219 "w_mbytes_per_sec": 0 00:09:59.219 }, 00:09:59.219 "claimed": false, 00:09:59.219 "zoned": false, 00:09:59.219 "supported_io_types": { 00:09:59.219 "read": true, 00:09:59.219 "write": true, 00:09:59.219 "unmap": true, 00:09:59.219 "flush": true, 00:09:59.219 "reset": true, 00:09:59.219 "nvme_admin": false, 00:09:59.219 "nvme_io": false, 00:09:59.219 "nvme_io_md": false, 00:09:59.219 "write_zeroes": true, 00:09:59.219 "zcopy": false, 00:09:59.219 "get_zone_info": false, 00:09:59.219 "zone_management": false, 00:09:59.219 "zone_append": false, 00:09:59.219 "compare": false, 00:09:59.219 "compare_and_write": false, 00:09:59.219 "abort": false, 00:09:59.219 "seek_hole": false, 00:09:59.219 "seek_data": false, 00:09:59.219 "copy": false, 00:09:59.219 "nvme_iov_md": false 00:09:59.219 }, 00:09:59.219 "memory_domains": [ 00:09:59.219 { 00:09:59.219 "dma_device_id": "system", 00:09:59.219 "dma_device_type": 1 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.219 "dma_device_type": 2 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "dma_device_id": "system", 00:09:59.219 "dma_device_type": 1 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.219 "dma_device_type": 2 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "dma_device_id": "system", 00:09:59.219 "dma_device_type": 1 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.219 "dma_device_type": 2 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "dma_device_id": "system", 00:09:59.219 "dma_device_type": 1 00:09:59.219 }, 
00:09:59.219 { 00:09:59.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.219 "dma_device_type": 2 00:09:59.219 } 00:09:59.219 ], 00:09:59.219 "driver_specific": { 00:09:59.219 "raid": { 00:09:59.219 "uuid": "b76fe81b-b19d-4953-bcf1-b7a29b9a1924", 00:09:59.219 "strip_size_kb": 64, 00:09:59.219 "state": "online", 00:09:59.219 "raid_level": "concat", 00:09:59.219 "superblock": false, 00:09:59.219 "num_base_bdevs": 4, 00:09:59.219 "num_base_bdevs_discovered": 4, 00:09:59.219 "num_base_bdevs_operational": 4, 00:09:59.219 "base_bdevs_list": [ 00:09:59.219 { 00:09:59.219 "name": "BaseBdev1", 00:09:59.219 "uuid": "00aa8fd5-145e-4e27-b039-c6a33dcc26c7", 00:09:59.219 "is_configured": true, 00:09:59.219 "data_offset": 0, 00:09:59.219 "data_size": 65536 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "name": "BaseBdev2", 00:09:59.219 "uuid": "b56a9614-e6d0-4b7c-9307-f9723afb2255", 00:09:59.219 "is_configured": true, 00:09:59.219 "data_offset": 0, 00:09:59.219 "data_size": 65536 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "name": "BaseBdev3", 00:09:59.219 "uuid": "4f56004b-cfb0-42c2-acd9-80bbda4aa6aa", 00:09:59.219 "is_configured": true, 00:09:59.219 "data_offset": 0, 00:09:59.219 "data_size": 65536 00:09:59.219 }, 00:09:59.219 { 00:09:59.219 "name": "BaseBdev4", 00:09:59.219 "uuid": "c559f029-4f5d-429c-9a0d-c81fb9043bee", 00:09:59.219 "is_configured": true, 00:09:59.219 "data_offset": 0, 00:09:59.219 "data_size": 65536 00:09:59.219 } 00:09:59.219 ] 00:09:59.219 } 00:09:59.219 } 00:09:59.219 }' 00:09:59.219 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:59.479 BaseBdev2 00:09:59.479 BaseBdev3 00:09:59.479 BaseBdev4' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd 
bdev_malloc_delete BaseBdev1 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.479 [2024-11-26 15:25:57.930562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.479 [2024-11-26 15:25:57.930636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.479 [2024-11-26 15:25:57.930709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.479 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.737 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.737 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.737 "name": "Existed_Raid", 00:09:59.737 "uuid": "b76fe81b-b19d-4953-bcf1-b7a29b9a1924", 00:09:59.737 "strip_size_kb": 64, 00:09:59.737 "state": "offline", 00:09:59.737 "raid_level": "concat", 00:09:59.737 "superblock": false, 00:09:59.738 "num_base_bdevs": 4, 00:09:59.738 "num_base_bdevs_discovered": 3, 00:09:59.738 "num_base_bdevs_operational": 3, 00:09:59.738 "base_bdevs_list": [ 00:09:59.738 { 00:09:59.738 "name": null, 00:09:59.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.738 "is_configured": false, 00:09:59.738 "data_offset": 0, 00:09:59.738 "data_size": 65536 00:09:59.738 }, 00:09:59.738 { 00:09:59.738 "name": "BaseBdev2", 00:09:59.738 "uuid": "b56a9614-e6d0-4b7c-9307-f9723afb2255", 00:09:59.738 "is_configured": true, 00:09:59.738 "data_offset": 0, 00:09:59.738 "data_size": 65536 00:09:59.738 }, 00:09:59.738 { 00:09:59.738 "name": "BaseBdev3", 00:09:59.738 "uuid": "4f56004b-cfb0-42c2-acd9-80bbda4aa6aa", 
00:09:59.738 "is_configured": true, 00:09:59.738 "data_offset": 0, 00:09:59.738 "data_size": 65536 00:09:59.738 }, 00:09:59.738 { 00:09:59.738 "name": "BaseBdev4", 00:09:59.738 "uuid": "c559f029-4f5d-429c-9a0d-c81fb9043bee", 00:09:59.738 "is_configured": true, 00:09:59.738 "data_offset": 0, 00:09:59.738 "data_size": 65536 00:09:59.738 } 00:09:59.738 ] 00:09:59.738 }' 00:09:59.738 15:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.738 15:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.996 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:59.996 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.996 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.996 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.996 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.996 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.996 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.996 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.996 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.997 [2024-11-26 15:25:58.446029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.997 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.257 [2024-11-26 15:25:58.513022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.257 
15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.257 [2024-11-26 15:25:58.584367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:00.257 [2024-11-26 15:25:58.584473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.257 BaseBdev2 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.257 [ 00:10:00.257 { 00:10:00.257 "name": "BaseBdev2", 00:10:00.257 "aliases": [ 00:10:00.257 "4e7784a8-1f66-48f1-9477-47898ef59cd0" 00:10:00.257 ], 00:10:00.257 "product_name": "Malloc disk", 00:10:00.257 "block_size": 512, 00:10:00.257 "num_blocks": 65536, 00:10:00.257 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:00.257 "assigned_rate_limits": { 00:10:00.257 "rw_ios_per_sec": 0, 00:10:00.257 "rw_mbytes_per_sec": 0, 00:10:00.257 "r_mbytes_per_sec": 0, 00:10:00.257 "w_mbytes_per_sec": 0 00:10:00.257 }, 00:10:00.257 "claimed": false, 00:10:00.257 "zoned": false, 00:10:00.257 "supported_io_types": { 00:10:00.257 "read": true, 00:10:00.257 "write": true, 00:10:00.257 "unmap": true, 00:10:00.257 "flush": true, 00:10:00.257 "reset": true, 00:10:00.257 "nvme_admin": false, 00:10:00.257 "nvme_io": false, 00:10:00.257 "nvme_io_md": false, 00:10:00.257 "write_zeroes": true, 00:10:00.257 "zcopy": true, 00:10:00.257 "get_zone_info": false, 00:10:00.257 "zone_management": false, 00:10:00.257 "zone_append": false, 00:10:00.257 "compare": false, 00:10:00.257 "compare_and_write": false, 00:10:00.257 "abort": true, 00:10:00.257 "seek_hole": false, 00:10:00.257 "seek_data": false, 00:10:00.257 "copy": true, 00:10:00.257 "nvme_iov_md": false 00:10:00.257 }, 00:10:00.257 "memory_domains": [ 00:10:00.257 { 00:10:00.257 "dma_device_id": "system", 00:10:00.257 
"dma_device_type": 1 00:10:00.257 }, 00:10:00.257 { 00:10:00.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.257 "dma_device_type": 2 00:10:00.257 } 00:10:00.257 ], 00:10:00.257 "driver_specific": {} 00:10:00.257 } 00:10:00.257 ] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.257 BaseBdev3 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.257 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.517 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.517 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.517 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.517 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.517 [ 00:10:00.517 { 00:10:00.517 "name": "BaseBdev3", 00:10:00.517 "aliases": [ 00:10:00.517 "ebbcce97-fe0b-40ab-a982-8785f565a767" 00:10:00.517 ], 00:10:00.517 "product_name": "Malloc disk", 00:10:00.517 "block_size": 512, 00:10:00.517 "num_blocks": 65536, 00:10:00.517 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:00.517 "assigned_rate_limits": { 00:10:00.517 "rw_ios_per_sec": 0, 00:10:00.517 "rw_mbytes_per_sec": 0, 00:10:00.517 "r_mbytes_per_sec": 0, 00:10:00.517 "w_mbytes_per_sec": 0 00:10:00.517 }, 00:10:00.517 "claimed": false, 00:10:00.517 "zoned": false, 00:10:00.517 "supported_io_types": { 00:10:00.517 "read": true, 00:10:00.517 "write": true, 00:10:00.517 "unmap": true, 00:10:00.517 "flush": true, 00:10:00.517 "reset": true, 00:10:00.517 "nvme_admin": false, 00:10:00.517 "nvme_io": false, 00:10:00.517 "nvme_io_md": false, 00:10:00.517 "write_zeroes": true, 00:10:00.517 "zcopy": true, 00:10:00.517 "get_zone_info": false, 00:10:00.517 "zone_management": false, 00:10:00.517 "zone_append": false, 00:10:00.517 "compare": false, 00:10:00.517 "compare_and_write": false, 00:10:00.517 "abort": true, 00:10:00.517 "seek_hole": false, 00:10:00.517 "seek_data": false, 00:10:00.517 "copy": true, 00:10:00.517 "nvme_iov_md": false 00:10:00.517 }, 00:10:00.517 "memory_domains": [ 00:10:00.517 { 00:10:00.517 "dma_device_id": "system", 00:10:00.517 
"dma_device_type": 1 00:10:00.517 }, 00:10:00.517 { 00:10:00.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.517 "dma_device_type": 2 00:10:00.517 } 00:10:00.517 ], 00:10:00.517 "driver_specific": {} 00:10:00.517 } 00:10:00.517 ] 00:10:00.517 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.517 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.517 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.518 BaseBdev4 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.518 [ 00:10:00.518 { 00:10:00.518 "name": "BaseBdev4", 00:10:00.518 "aliases": [ 00:10:00.518 "4deee179-a594-4d39-b415-837259e10a6f" 00:10:00.518 ], 00:10:00.518 "product_name": "Malloc disk", 00:10:00.518 "block_size": 512, 00:10:00.518 "num_blocks": 65536, 00:10:00.518 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 00:10:00.518 "assigned_rate_limits": { 00:10:00.518 "rw_ios_per_sec": 0, 00:10:00.518 "rw_mbytes_per_sec": 0, 00:10:00.518 "r_mbytes_per_sec": 0, 00:10:00.518 "w_mbytes_per_sec": 0 00:10:00.518 }, 00:10:00.518 "claimed": false, 00:10:00.518 "zoned": false, 00:10:00.518 "supported_io_types": { 00:10:00.518 "read": true, 00:10:00.518 "write": true, 00:10:00.518 "unmap": true, 00:10:00.518 "flush": true, 00:10:00.518 "reset": true, 00:10:00.518 "nvme_admin": false, 00:10:00.518 "nvme_io": false, 00:10:00.518 "nvme_io_md": false, 00:10:00.518 "write_zeroes": true, 00:10:00.518 "zcopy": true, 00:10:00.518 "get_zone_info": false, 00:10:00.518 "zone_management": false, 00:10:00.518 "zone_append": false, 00:10:00.518 "compare": false, 00:10:00.518 "compare_and_write": false, 00:10:00.518 "abort": true, 00:10:00.518 "seek_hole": false, 00:10:00.518 "seek_data": false, 00:10:00.518 "copy": true, 00:10:00.518 "nvme_iov_md": false 00:10:00.518 }, 00:10:00.518 "memory_domains": [ 00:10:00.518 { 00:10:00.518 "dma_device_id": "system", 00:10:00.518 
"dma_device_type": 1 00:10:00.518 }, 00:10:00.518 { 00:10:00.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.518 "dma_device_type": 2 00:10:00.518 } 00:10:00.518 ], 00:10:00.518 "driver_specific": {} 00:10:00.518 } 00:10:00.518 ] 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.518 [2024-11-26 15:25:58.816993] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.518 [2024-11-26 15:25:58.817080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.518 [2024-11-26 15:25:58.817132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.518 [2024-11-26 15:25:58.818900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.518 [2024-11-26 15:25:58.818983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:00.518 15:25:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.518 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.518 "name": "Existed_Raid", 00:10:00.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.518 "strip_size_kb": 64, 00:10:00.518 "state": "configuring", 00:10:00.518 "raid_level": "concat", 00:10:00.518 "superblock": false, 00:10:00.518 "num_base_bdevs": 4, 00:10:00.518 "num_base_bdevs_discovered": 3, 00:10:00.518 
"num_base_bdevs_operational": 4, 00:10:00.518 "base_bdevs_list": [ 00:10:00.518 { 00:10:00.518 "name": "BaseBdev1", 00:10:00.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.518 "is_configured": false, 00:10:00.518 "data_offset": 0, 00:10:00.518 "data_size": 0 00:10:00.518 }, 00:10:00.518 { 00:10:00.518 "name": "BaseBdev2", 00:10:00.518 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:00.518 "is_configured": true, 00:10:00.519 "data_offset": 0, 00:10:00.519 "data_size": 65536 00:10:00.519 }, 00:10:00.519 { 00:10:00.519 "name": "BaseBdev3", 00:10:00.519 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:00.519 "is_configured": true, 00:10:00.519 "data_offset": 0, 00:10:00.519 "data_size": 65536 00:10:00.519 }, 00:10:00.519 { 00:10:00.519 "name": "BaseBdev4", 00:10:00.519 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 00:10:00.519 "is_configured": true, 00:10:00.519 "data_offset": 0, 00:10:00.519 "data_size": 65536 00:10:00.519 } 00:10:00.519 ] 00:10:00.519 }' 00:10:00.519 15:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.519 15:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.778 [2024-11-26 15:25:59.189063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.778 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.779 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.779 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.779 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.779 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.779 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.779 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.779 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.779 "name": "Existed_Raid", 00:10:00.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.779 "strip_size_kb": 64, 00:10:00.779 "state": "configuring", 00:10:00.779 "raid_level": "concat", 00:10:00.779 "superblock": false, 00:10:00.779 "num_base_bdevs": 4, 00:10:00.779 "num_base_bdevs_discovered": 2, 00:10:00.779 "num_base_bdevs_operational": 4, 00:10:00.779 "base_bdevs_list": [ 
00:10:00.779 { 00:10:00.779 "name": "BaseBdev1", 00:10:00.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.779 "is_configured": false, 00:10:00.779 "data_offset": 0, 00:10:00.779 "data_size": 0 00:10:00.779 }, 00:10:00.779 { 00:10:00.779 "name": null, 00:10:00.779 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:00.779 "is_configured": false, 00:10:00.779 "data_offset": 0, 00:10:00.779 "data_size": 65536 00:10:00.779 }, 00:10:00.779 { 00:10:00.779 "name": "BaseBdev3", 00:10:00.779 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:00.779 "is_configured": true, 00:10:00.779 "data_offset": 0, 00:10:00.779 "data_size": 65536 00:10:00.779 }, 00:10:00.779 { 00:10:00.779 "name": "BaseBdev4", 00:10:00.779 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 00:10:00.779 "is_configured": true, 00:10:00.779 "data_offset": 0, 00:10:00.779 "data_size": 65536 00:10:00.779 } 00:10:00.779 ] 00:10:00.779 }' 00:10:00.779 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.779 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.348 BaseBdev1 00:10:01.348 [2024-11-26 15:25:59.696278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.348 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.348 [ 00:10:01.348 { 00:10:01.348 "name": "BaseBdev1", 00:10:01.348 "aliases": [ 00:10:01.348 
"730bb6bc-68f5-48eb-adae-24559ae07267" 00:10:01.348 ], 00:10:01.348 "product_name": "Malloc disk", 00:10:01.348 "block_size": 512, 00:10:01.348 "num_blocks": 65536, 00:10:01.349 "uuid": "730bb6bc-68f5-48eb-adae-24559ae07267", 00:10:01.349 "assigned_rate_limits": { 00:10:01.349 "rw_ios_per_sec": 0, 00:10:01.349 "rw_mbytes_per_sec": 0, 00:10:01.349 "r_mbytes_per_sec": 0, 00:10:01.349 "w_mbytes_per_sec": 0 00:10:01.349 }, 00:10:01.349 "claimed": true, 00:10:01.349 "claim_type": "exclusive_write", 00:10:01.349 "zoned": false, 00:10:01.349 "supported_io_types": { 00:10:01.349 "read": true, 00:10:01.349 "write": true, 00:10:01.349 "unmap": true, 00:10:01.349 "flush": true, 00:10:01.349 "reset": true, 00:10:01.349 "nvme_admin": false, 00:10:01.349 "nvme_io": false, 00:10:01.349 "nvme_io_md": false, 00:10:01.349 "write_zeroes": true, 00:10:01.349 "zcopy": true, 00:10:01.349 "get_zone_info": false, 00:10:01.349 "zone_management": false, 00:10:01.349 "zone_append": false, 00:10:01.349 "compare": false, 00:10:01.349 "compare_and_write": false, 00:10:01.349 "abort": true, 00:10:01.349 "seek_hole": false, 00:10:01.349 "seek_data": false, 00:10:01.349 "copy": true, 00:10:01.349 "nvme_iov_md": false 00:10:01.349 }, 00:10:01.349 "memory_domains": [ 00:10:01.349 { 00:10:01.349 "dma_device_id": "system", 00:10:01.349 "dma_device_type": 1 00:10:01.349 }, 00:10:01.349 { 00:10:01.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.349 "dma_device_type": 2 00:10:01.349 } 00:10:01.349 ], 00:10:01.349 "driver_specific": {} 00:10:01.349 } 00:10:01.349 ] 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.349 "name": "Existed_Raid", 00:10:01.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.349 "strip_size_kb": 64, 00:10:01.349 "state": "configuring", 00:10:01.349 "raid_level": "concat", 00:10:01.349 "superblock": false, 00:10:01.349 "num_base_bdevs": 4, 00:10:01.349 "num_base_bdevs_discovered": 3, 00:10:01.349 "num_base_bdevs_operational": 4, 00:10:01.349 
"base_bdevs_list": [ 00:10:01.349 { 00:10:01.349 "name": "BaseBdev1", 00:10:01.349 "uuid": "730bb6bc-68f5-48eb-adae-24559ae07267", 00:10:01.349 "is_configured": true, 00:10:01.349 "data_offset": 0, 00:10:01.349 "data_size": 65536 00:10:01.349 }, 00:10:01.349 { 00:10:01.349 "name": null, 00:10:01.349 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:01.349 "is_configured": false, 00:10:01.349 "data_offset": 0, 00:10:01.349 "data_size": 65536 00:10:01.349 }, 00:10:01.349 { 00:10:01.349 "name": "BaseBdev3", 00:10:01.349 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:01.349 "is_configured": true, 00:10:01.349 "data_offset": 0, 00:10:01.349 "data_size": 65536 00:10:01.349 }, 00:10:01.349 { 00:10:01.349 "name": "BaseBdev4", 00:10:01.349 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 00:10:01.349 "is_configured": true, 00:10:01.349 "data_offset": 0, 00:10:01.349 "data_size": 65536 00:10:01.349 } 00:10:01.349 ] 00:10:01.349 }' 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.349 15:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:01.917 15:26:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.917 [2024-11-26 15:26:00.256481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:01.917 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.918 "name": "Existed_Raid", 00:10:01.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.918 "strip_size_kb": 64, 00:10:01.918 "state": "configuring", 00:10:01.918 "raid_level": "concat", 00:10:01.918 "superblock": false, 00:10:01.918 "num_base_bdevs": 4, 00:10:01.918 "num_base_bdevs_discovered": 2, 00:10:01.918 "num_base_bdevs_operational": 4, 00:10:01.918 "base_bdevs_list": [ 00:10:01.918 { 00:10:01.918 "name": "BaseBdev1", 00:10:01.918 "uuid": "730bb6bc-68f5-48eb-adae-24559ae07267", 00:10:01.918 "is_configured": true, 00:10:01.918 "data_offset": 0, 00:10:01.918 "data_size": 65536 00:10:01.918 }, 00:10:01.918 { 00:10:01.918 "name": null, 00:10:01.918 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:01.918 "is_configured": false, 00:10:01.918 "data_offset": 0, 00:10:01.918 "data_size": 65536 00:10:01.918 }, 00:10:01.918 { 00:10:01.918 "name": null, 00:10:01.918 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:01.918 "is_configured": false, 00:10:01.918 "data_offset": 0, 00:10:01.918 "data_size": 65536 00:10:01.918 }, 00:10:01.918 { 00:10:01.918 "name": "BaseBdev4", 00:10:01.918 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 00:10:01.918 "is_configured": true, 00:10:01.918 "data_offset": 0, 00:10:01.918 "data_size": 65536 00:10:01.918 } 00:10:01.918 ] 00:10:01.918 }' 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.918 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.177 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.177 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:02.177 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.177 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.437 [2024-11-26 15:26:00.684661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.437 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.437 "name": "Existed_Raid", 00:10:02.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.437 "strip_size_kb": 64, 00:10:02.437 "state": "configuring", 00:10:02.437 "raid_level": "concat", 00:10:02.437 "superblock": false, 00:10:02.437 "num_base_bdevs": 4, 00:10:02.437 "num_base_bdevs_discovered": 3, 00:10:02.437 "num_base_bdevs_operational": 4, 00:10:02.437 "base_bdevs_list": [ 00:10:02.437 { 00:10:02.438 "name": "BaseBdev1", 00:10:02.438 "uuid": "730bb6bc-68f5-48eb-adae-24559ae07267", 00:10:02.438 "is_configured": true, 00:10:02.438 "data_offset": 0, 00:10:02.438 "data_size": 65536 00:10:02.438 }, 00:10:02.438 { 00:10:02.438 "name": null, 00:10:02.438 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:02.438 "is_configured": false, 00:10:02.438 "data_offset": 0, 00:10:02.438 "data_size": 65536 00:10:02.438 }, 00:10:02.438 { 00:10:02.438 "name": "BaseBdev3", 00:10:02.438 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:02.438 "is_configured": true, 00:10:02.438 "data_offset": 0, 00:10:02.438 "data_size": 65536 00:10:02.438 }, 00:10:02.438 { 
00:10:02.438 "name": "BaseBdev4", 00:10:02.438 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 00:10:02.438 "is_configured": true, 00:10:02.438 "data_offset": 0, 00:10:02.438 "data_size": 65536 00:10:02.438 } 00:10:02.438 ] 00:10:02.438 }' 00:10:02.438 15:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.438 15:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.697 [2024-11-26 15:26:01.132799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.697 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.956 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.956 "name": "Existed_Raid", 00:10:02.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.956 "strip_size_kb": 64, 00:10:02.956 "state": "configuring", 00:10:02.956 "raid_level": "concat", 00:10:02.956 "superblock": false, 00:10:02.956 "num_base_bdevs": 4, 00:10:02.956 "num_base_bdevs_discovered": 2, 00:10:02.956 "num_base_bdevs_operational": 4, 00:10:02.956 "base_bdevs_list": [ 00:10:02.956 { 00:10:02.956 "name": null, 00:10:02.956 "uuid": 
"730bb6bc-68f5-48eb-adae-24559ae07267", 00:10:02.956 "is_configured": false, 00:10:02.956 "data_offset": 0, 00:10:02.956 "data_size": 65536 00:10:02.956 }, 00:10:02.956 { 00:10:02.956 "name": null, 00:10:02.956 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:02.956 "is_configured": false, 00:10:02.956 "data_offset": 0, 00:10:02.956 "data_size": 65536 00:10:02.956 }, 00:10:02.956 { 00:10:02.956 "name": "BaseBdev3", 00:10:02.956 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:02.956 "is_configured": true, 00:10:02.956 "data_offset": 0, 00:10:02.956 "data_size": 65536 00:10:02.956 }, 00:10:02.956 { 00:10:02.956 "name": "BaseBdev4", 00:10:02.956 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 00:10:02.956 "is_configured": true, 00:10:02.956 "data_offset": 0, 00:10:02.956 "data_size": 65536 00:10:02.956 } 00:10:02.956 ] 00:10:02.956 }' 00:10:02.956 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.956 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.215 [2024-11-26 15:26:01.627503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.215 15:26:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.215 "name": "Existed_Raid", 00:10:03.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.215 "strip_size_kb": 64, 00:10:03.215 "state": "configuring", 00:10:03.215 "raid_level": "concat", 00:10:03.215 "superblock": false, 00:10:03.215 "num_base_bdevs": 4, 00:10:03.215 "num_base_bdevs_discovered": 3, 00:10:03.215 "num_base_bdevs_operational": 4, 00:10:03.215 "base_bdevs_list": [ 00:10:03.215 { 00:10:03.215 "name": null, 00:10:03.215 "uuid": "730bb6bc-68f5-48eb-adae-24559ae07267", 00:10:03.215 "is_configured": false, 00:10:03.215 "data_offset": 0, 00:10:03.215 "data_size": 65536 00:10:03.215 }, 00:10:03.215 { 00:10:03.215 "name": "BaseBdev2", 00:10:03.215 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:03.215 "is_configured": true, 00:10:03.215 "data_offset": 0, 00:10:03.215 "data_size": 65536 00:10:03.215 }, 00:10:03.215 { 00:10:03.215 "name": "BaseBdev3", 00:10:03.215 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:03.215 "is_configured": true, 00:10:03.215 "data_offset": 0, 00:10:03.215 "data_size": 65536 00:10:03.215 }, 00:10:03.215 { 00:10:03.215 "name": "BaseBdev4", 00:10:03.215 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 00:10:03.215 "is_configured": true, 00:10:03.215 "data_offset": 0, 00:10:03.215 "data_size": 65536 00:10:03.215 } 00:10:03.215 ] 00:10:03.215 }' 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.215 15:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.785 15:26:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 730bb6bc-68f5-48eb-adae-24559ae07267 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.785 NewBaseBdev 00:10:03.785 [2024-11-26 15:26:02.202711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.785 [2024-11-26 15:26:02.202753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:03.785 [2024-11-26 15:26:02.202764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:03.785 [2024-11-26 15:26:02.203002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:03.785 [2024-11-26 15:26:02.203113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:10:03.785 [2024-11-26 15:26:02.203123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:03.785 [2024-11-26 15:26:02.203308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.785 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.786 [ 00:10:03.786 { 00:10:03.786 "name": "NewBaseBdev", 00:10:03.786 "aliases": [ 00:10:03.786 
"730bb6bc-68f5-48eb-adae-24559ae07267" 00:10:03.786 ], 00:10:03.786 "product_name": "Malloc disk", 00:10:03.786 "block_size": 512, 00:10:03.786 "num_blocks": 65536, 00:10:03.786 "uuid": "730bb6bc-68f5-48eb-adae-24559ae07267", 00:10:03.786 "assigned_rate_limits": { 00:10:03.786 "rw_ios_per_sec": 0, 00:10:03.786 "rw_mbytes_per_sec": 0, 00:10:03.786 "r_mbytes_per_sec": 0, 00:10:03.786 "w_mbytes_per_sec": 0 00:10:03.786 }, 00:10:03.786 "claimed": true, 00:10:03.786 "claim_type": "exclusive_write", 00:10:03.786 "zoned": false, 00:10:03.786 "supported_io_types": { 00:10:03.786 "read": true, 00:10:03.786 "write": true, 00:10:03.786 "unmap": true, 00:10:03.786 "flush": true, 00:10:03.786 "reset": true, 00:10:03.786 "nvme_admin": false, 00:10:03.786 "nvme_io": false, 00:10:03.786 "nvme_io_md": false, 00:10:03.786 "write_zeroes": true, 00:10:03.786 "zcopy": true, 00:10:03.786 "get_zone_info": false, 00:10:03.786 "zone_management": false, 00:10:03.786 "zone_append": false, 00:10:03.786 "compare": false, 00:10:03.786 "compare_and_write": false, 00:10:03.786 "abort": true, 00:10:03.786 "seek_hole": false, 00:10:03.786 "seek_data": false, 00:10:03.786 "copy": true, 00:10:03.786 "nvme_iov_md": false 00:10:03.786 }, 00:10:03.786 "memory_domains": [ 00:10:03.786 { 00:10:03.786 "dma_device_id": "system", 00:10:03.786 "dma_device_type": 1 00:10:03.786 }, 00:10:03.786 { 00:10:03.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.786 "dma_device_type": 2 00:10:03.786 } 00:10:03.786 ], 00:10:03.786 "driver_specific": {} 00:10:03.786 } 00:10:03.786 ] 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.786 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.066 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.066 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.066 "name": "Existed_Raid", 00:10:04.066 "uuid": "684d95eb-4aaa-4840-88d9-068a75776e2f", 00:10:04.066 "strip_size_kb": 64, 00:10:04.066 "state": "online", 00:10:04.066 "raid_level": "concat", 00:10:04.066 "superblock": false, 00:10:04.066 "num_base_bdevs": 4, 00:10:04.066 "num_base_bdevs_discovered": 4, 00:10:04.066 "num_base_bdevs_operational": 4, 00:10:04.066 "base_bdevs_list": [ 
00:10:04.066 { 00:10:04.066 "name": "NewBaseBdev", 00:10:04.066 "uuid": "730bb6bc-68f5-48eb-adae-24559ae07267", 00:10:04.067 "is_configured": true, 00:10:04.067 "data_offset": 0, 00:10:04.067 "data_size": 65536 00:10:04.067 }, 00:10:04.067 { 00:10:04.067 "name": "BaseBdev2", 00:10:04.067 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:04.067 "is_configured": true, 00:10:04.067 "data_offset": 0, 00:10:04.067 "data_size": 65536 00:10:04.067 }, 00:10:04.067 { 00:10:04.067 "name": "BaseBdev3", 00:10:04.067 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:04.067 "is_configured": true, 00:10:04.067 "data_offset": 0, 00:10:04.067 "data_size": 65536 00:10:04.067 }, 00:10:04.067 { 00:10:04.067 "name": "BaseBdev4", 00:10:04.067 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 00:10:04.067 "is_configured": true, 00:10:04.067 "data_offset": 0, 00:10:04.067 "data_size": 65536 00:10:04.067 } 00:10:04.067 ] 00:10:04.067 }' 00:10:04.067 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.067 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.403 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.404 [2024-11-26 15:26:02.647220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.404 "name": "Existed_Raid", 00:10:04.404 "aliases": [ 00:10:04.404 "684d95eb-4aaa-4840-88d9-068a75776e2f" 00:10:04.404 ], 00:10:04.404 "product_name": "Raid Volume", 00:10:04.404 "block_size": 512, 00:10:04.404 "num_blocks": 262144, 00:10:04.404 "uuid": "684d95eb-4aaa-4840-88d9-068a75776e2f", 00:10:04.404 "assigned_rate_limits": { 00:10:04.404 "rw_ios_per_sec": 0, 00:10:04.404 "rw_mbytes_per_sec": 0, 00:10:04.404 "r_mbytes_per_sec": 0, 00:10:04.404 "w_mbytes_per_sec": 0 00:10:04.404 }, 00:10:04.404 "claimed": false, 00:10:04.404 "zoned": false, 00:10:04.404 "supported_io_types": { 00:10:04.404 "read": true, 00:10:04.404 "write": true, 00:10:04.404 "unmap": true, 00:10:04.404 "flush": true, 00:10:04.404 "reset": true, 00:10:04.404 "nvme_admin": false, 00:10:04.404 "nvme_io": false, 00:10:04.404 "nvme_io_md": false, 00:10:04.404 "write_zeroes": true, 00:10:04.404 "zcopy": false, 00:10:04.404 "get_zone_info": false, 00:10:04.404 "zone_management": false, 00:10:04.404 "zone_append": false, 00:10:04.404 "compare": false, 00:10:04.404 "compare_and_write": false, 00:10:04.404 "abort": false, 00:10:04.404 "seek_hole": false, 00:10:04.404 "seek_data": false, 00:10:04.404 "copy": false, 00:10:04.404 "nvme_iov_md": false 00:10:04.404 }, 00:10:04.404 "memory_domains": [ 00:10:04.404 { 00:10:04.404 "dma_device_id": "system", 00:10:04.404 "dma_device_type": 1 00:10:04.404 }, 00:10:04.404 { 
00:10:04.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.404 "dma_device_type": 2 00:10:04.404 }, 00:10:04.404 { 00:10:04.404 "dma_device_id": "system", 00:10:04.404 "dma_device_type": 1 00:10:04.404 }, 00:10:04.404 { 00:10:04.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.404 "dma_device_type": 2 00:10:04.404 }, 00:10:04.404 { 00:10:04.404 "dma_device_id": "system", 00:10:04.404 "dma_device_type": 1 00:10:04.404 }, 00:10:04.404 { 00:10:04.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.404 "dma_device_type": 2 00:10:04.404 }, 00:10:04.404 { 00:10:04.404 "dma_device_id": "system", 00:10:04.404 "dma_device_type": 1 00:10:04.404 }, 00:10:04.404 { 00:10:04.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.404 "dma_device_type": 2 00:10:04.404 } 00:10:04.404 ], 00:10:04.404 "driver_specific": { 00:10:04.404 "raid": { 00:10:04.404 "uuid": "684d95eb-4aaa-4840-88d9-068a75776e2f", 00:10:04.404 "strip_size_kb": 64, 00:10:04.404 "state": "online", 00:10:04.404 "raid_level": "concat", 00:10:04.404 "superblock": false, 00:10:04.404 "num_base_bdevs": 4, 00:10:04.404 "num_base_bdevs_discovered": 4, 00:10:04.404 "num_base_bdevs_operational": 4, 00:10:04.404 "base_bdevs_list": [ 00:10:04.404 { 00:10:04.404 "name": "NewBaseBdev", 00:10:04.404 "uuid": "730bb6bc-68f5-48eb-adae-24559ae07267", 00:10:04.404 "is_configured": true, 00:10:04.404 "data_offset": 0, 00:10:04.404 "data_size": 65536 00:10:04.404 }, 00:10:04.404 { 00:10:04.404 "name": "BaseBdev2", 00:10:04.404 "uuid": "4e7784a8-1f66-48f1-9477-47898ef59cd0", 00:10:04.404 "is_configured": true, 00:10:04.404 "data_offset": 0, 00:10:04.404 "data_size": 65536 00:10:04.404 }, 00:10:04.404 { 00:10:04.404 "name": "BaseBdev3", 00:10:04.404 "uuid": "ebbcce97-fe0b-40ab-a982-8785f565a767", 00:10:04.404 "is_configured": true, 00:10:04.404 "data_offset": 0, 00:10:04.404 "data_size": 65536 00:10:04.404 }, 00:10:04.404 { 00:10:04.404 "name": "BaseBdev4", 00:10:04.404 "uuid": "4deee179-a594-4d39-b415-837259e10a6f", 
00:10:04.404 "is_configured": true, 00:10:04.404 "data_offset": 0, 00:10:04.404 "data_size": 65536 00:10:04.404 } 00:10:04.404 ] 00:10:04.404 } 00:10:04.404 } 00:10:04.404 }' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:04.404 BaseBdev2 00:10:04.404 BaseBdev3 00:10:04.404 BaseBdev4' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.404 
15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.404 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.405 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.405 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.405 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.405 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.405 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.663 [2024-11-26 15:26:02.974966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.663 [2024-11-26 15:26:02.974994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.663 [2024-11-26 15:26:02.975067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.663 [2024-11-26 15:26:02.975133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.663 [2024-11-26 15:26:02.975157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83730 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83730 ']' 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 83730 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.663 15:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83730 00:10:04.663 killing process with pid 83730 00:10:04.663 15:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.663 15:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.663 15:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83730' 00:10:04.663 15:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 83730 00:10:04.663 [2024-11-26 15:26:03.022892] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.663 15:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 83730 00:10:04.663 [2024-11-26 15:26:03.063092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.922 00:10:04.922 real 0m9.335s 00:10:04.922 user 0m16.017s 00:10:04.922 sys 0m1.843s 00:10:04.922 ************************************ 00:10:04.922 END TEST raid_state_function_test 00:10:04.922 ************************************ 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.922 15:26:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:04.922 15:26:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:10:04.922 15:26:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.922 15:26:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.922 ************************************ 00:10:04.922 START TEST raid_state_function_test_sb 00:10:04.922 ************************************ 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.922 15:26:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:04.922 Process raid pid: 84379 00:10:04.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84379 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84379' 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84379 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84379 ']' 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.922 15:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.181 [2024-11-26 15:26:03.445756] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:10:05.181 [2024-11-26 15:26:03.445909] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.181 [2024-11-26 15:26:03.579600] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:05.181 [2024-11-26 15:26:03.602700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.181 [2024-11-26 15:26:03.630500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.441 [2024-11-26 15:26:03.673156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.441 [2024-11-26 15:26:03.673299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.011 [2024-11-26 15:26:04.272008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.011 [2024-11-26 15:26:04.272060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.011 [2024-11-26 15:26:04.272071] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.011 [2024-11-26 15:26:04.272079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.011 [2024-11-26 15:26:04.272089] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.011 [2024-11-26 15:26:04.272095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.011 [2024-11-26 15:26:04.272103] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:06.011 [2024-11-26 15:26:04.272109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.011 "name": "Existed_Raid", 00:10:06.011 "uuid": "56a67c24-b57c-4908-8a7d-b4f881ea8bf0", 00:10:06.011 "strip_size_kb": 64, 00:10:06.011 "state": "configuring", 00:10:06.011 "raid_level": "concat", 00:10:06.011 "superblock": true, 00:10:06.011 "num_base_bdevs": 4, 00:10:06.011 "num_base_bdevs_discovered": 0, 00:10:06.011 "num_base_bdevs_operational": 4, 00:10:06.011 "base_bdevs_list": [ 00:10:06.011 { 00:10:06.011 "name": "BaseBdev1", 00:10:06.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.011 "is_configured": false, 00:10:06.011 "data_offset": 0, 00:10:06.011 "data_size": 0 00:10:06.011 }, 00:10:06.011 { 00:10:06.011 "name": "BaseBdev2", 00:10:06.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.011 "is_configured": false, 00:10:06.011 "data_offset": 0, 00:10:06.011 "data_size": 0 00:10:06.011 }, 00:10:06.011 { 00:10:06.011 "name": "BaseBdev3", 00:10:06.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.011 "is_configured": false, 00:10:06.011 "data_offset": 0, 00:10:06.011 "data_size": 0 00:10:06.011 }, 00:10:06.011 { 00:10:06.011 "name": "BaseBdev4", 00:10:06.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.011 "is_configured": false, 00:10:06.011 "data_offset": 0, 00:10:06.011 "data_size": 0 00:10:06.011 } 00:10:06.011 ] 00:10:06.011 }' 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.011 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.271 [2024-11-26 15:26:04.663997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.271 [2024-11-26 15:26:04.664031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.271 [2024-11-26 15:26:04.676028] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.271 [2024-11-26 15:26:04.676069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.271 [2024-11-26 15:26:04.676080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.271 [2024-11-26 15:26:04.676103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.271 [2024-11-26 15:26:04.676110] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.271 [2024-11-26 15:26:04.676117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.271 [2024-11-26 15:26:04.676124] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:06.271 [2024-11-26 15:26:04.676131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.271 [2024-11-26 15:26:04.696866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.271 BaseBdev1 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.271 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.272 [ 00:10:06.272 { 00:10:06.272 "name": "BaseBdev1", 00:10:06.272 "aliases": [ 00:10:06.272 "8ad602e3-412f-45ae-937a-9b57f713170f" 00:10:06.272 ], 00:10:06.272 "product_name": "Malloc disk", 00:10:06.272 "block_size": 512, 00:10:06.272 "num_blocks": 65536, 00:10:06.272 "uuid": "8ad602e3-412f-45ae-937a-9b57f713170f", 00:10:06.272 "assigned_rate_limits": { 00:10:06.272 "rw_ios_per_sec": 0, 00:10:06.272 "rw_mbytes_per_sec": 0, 00:10:06.272 "r_mbytes_per_sec": 0, 00:10:06.272 "w_mbytes_per_sec": 0 00:10:06.272 }, 00:10:06.272 "claimed": true, 00:10:06.272 "claim_type": "exclusive_write", 00:10:06.272 "zoned": false, 00:10:06.272 "supported_io_types": { 00:10:06.272 "read": true, 00:10:06.272 "write": true, 00:10:06.272 "unmap": true, 00:10:06.272 "flush": true, 00:10:06.272 "reset": true, 00:10:06.272 "nvme_admin": false, 00:10:06.272 "nvme_io": false, 00:10:06.272 "nvme_io_md": false, 00:10:06.272 "write_zeroes": true, 00:10:06.272 "zcopy": true, 00:10:06.272 "get_zone_info": false, 00:10:06.272 "zone_management": false, 00:10:06.272 "zone_append": false, 00:10:06.272 "compare": false, 00:10:06.272 "compare_and_write": false, 00:10:06.272 "abort": true, 00:10:06.272 "seek_hole": false, 00:10:06.272 "seek_data": false, 00:10:06.272 "copy": true, 00:10:06.272 "nvme_iov_md": false 00:10:06.272 }, 00:10:06.272 "memory_domains": [ 00:10:06.272 { 00:10:06.272 "dma_device_id": "system", 00:10:06.272 "dma_device_type": 1 00:10:06.272 }, 00:10:06.272 { 00:10:06.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.272 "dma_device_type": 2 00:10:06.272 } 00:10:06.272 ], 00:10:06.272 "driver_specific": {} 00:10:06.272 } 00:10:06.272 ] 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.272 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.532 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.532 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.532 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.532 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.532 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.532 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.532 "name": "Existed_Raid", 00:10:06.532 "uuid": "0a447cff-5440-465a-9126-fd4b141d1596", 
00:10:06.532 "strip_size_kb": 64, 00:10:06.532 "state": "configuring", 00:10:06.532 "raid_level": "concat", 00:10:06.532 "superblock": true, 00:10:06.532 "num_base_bdevs": 4, 00:10:06.532 "num_base_bdevs_discovered": 1, 00:10:06.532 "num_base_bdevs_operational": 4, 00:10:06.532 "base_bdevs_list": [ 00:10:06.532 { 00:10:06.532 "name": "BaseBdev1", 00:10:06.532 "uuid": "8ad602e3-412f-45ae-937a-9b57f713170f", 00:10:06.532 "is_configured": true, 00:10:06.532 "data_offset": 2048, 00:10:06.532 "data_size": 63488 00:10:06.532 }, 00:10:06.532 { 00:10:06.532 "name": "BaseBdev2", 00:10:06.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.532 "is_configured": false, 00:10:06.532 "data_offset": 0, 00:10:06.532 "data_size": 0 00:10:06.532 }, 00:10:06.532 { 00:10:06.532 "name": "BaseBdev3", 00:10:06.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.532 "is_configured": false, 00:10:06.532 "data_offset": 0, 00:10:06.532 "data_size": 0 00:10:06.532 }, 00:10:06.532 { 00:10:06.532 "name": "BaseBdev4", 00:10:06.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.532 "is_configured": false, 00:10:06.532 "data_offset": 0, 00:10:06.532 "data_size": 0 00:10:06.532 } 00:10:06.532 ] 00:10:06.532 }' 00:10:06.532 15:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.532 15:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.792 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.792 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 [2024-11-26 15:26:05.197033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.792 [2024-11-26 15:26:05.197091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:06.792 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.792 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.792 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.792 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 [2024-11-26 15:26:05.205088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.792 [2024-11-26 15:26:05.207021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.792 [2024-11-26 15:26:05.207094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.792 [2024-11-26 15:26:05.207129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.792 [2024-11-26 15:26:05.207151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.792 [2024-11-26 15:26:05.207185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:06.792 [2024-11-26 15:26:05.207207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.793 "name": "Existed_Raid", 00:10:06.793 "uuid": "08d804d8-7b65-49e7-93b8-764fad3e1edb", 00:10:06.793 "strip_size_kb": 64, 00:10:06.793 "state": "configuring", 00:10:06.793 "raid_level": "concat", 00:10:06.793 "superblock": true, 00:10:06.793 
"num_base_bdevs": 4, 00:10:06.793 "num_base_bdevs_discovered": 1, 00:10:06.793 "num_base_bdevs_operational": 4, 00:10:06.793 "base_bdevs_list": [ 00:10:06.793 { 00:10:06.793 "name": "BaseBdev1", 00:10:06.793 "uuid": "8ad602e3-412f-45ae-937a-9b57f713170f", 00:10:06.793 "is_configured": true, 00:10:06.793 "data_offset": 2048, 00:10:06.793 "data_size": 63488 00:10:06.793 }, 00:10:06.793 { 00:10:06.793 "name": "BaseBdev2", 00:10:06.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.793 "is_configured": false, 00:10:06.793 "data_offset": 0, 00:10:06.793 "data_size": 0 00:10:06.793 }, 00:10:06.793 { 00:10:06.793 "name": "BaseBdev3", 00:10:06.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.793 "is_configured": false, 00:10:06.793 "data_offset": 0, 00:10:06.793 "data_size": 0 00:10:06.793 }, 00:10:06.793 { 00:10:06.793 "name": "BaseBdev4", 00:10:06.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.793 "is_configured": false, 00:10:06.793 "data_offset": 0, 00:10:06.793 "data_size": 0 00:10:06.793 } 00:10:06.793 ] 00:10:06.793 }' 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.793 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.363 BaseBdev2 00:10:07.363 [2024-11-26 15:26:05.644176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev 
BaseBdev2 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.363 [ 00:10:07.363 { 00:10:07.363 "name": "BaseBdev2", 00:10:07.363 "aliases": [ 00:10:07.363 "cba135b3-ec6e-4163-9c0e-4637bf99fb74" 00:10:07.363 ], 00:10:07.363 "product_name": "Malloc disk", 00:10:07.363 "block_size": 512, 00:10:07.363 "num_blocks": 65536, 00:10:07.363 "uuid": "cba135b3-ec6e-4163-9c0e-4637bf99fb74", 00:10:07.363 "assigned_rate_limits": { 00:10:07.363 "rw_ios_per_sec": 0, 00:10:07.363 "rw_mbytes_per_sec": 0, 00:10:07.363 "r_mbytes_per_sec": 0, 00:10:07.363 "w_mbytes_per_sec": 0 00:10:07.363 }, 00:10:07.363 "claimed": true, 00:10:07.363 "claim_type": 
"exclusive_write", 00:10:07.363 "zoned": false, 00:10:07.363 "supported_io_types": { 00:10:07.363 "read": true, 00:10:07.363 "write": true, 00:10:07.363 "unmap": true, 00:10:07.363 "flush": true, 00:10:07.363 "reset": true, 00:10:07.363 "nvme_admin": false, 00:10:07.363 "nvme_io": false, 00:10:07.363 "nvme_io_md": false, 00:10:07.363 "write_zeroes": true, 00:10:07.363 "zcopy": true, 00:10:07.363 "get_zone_info": false, 00:10:07.363 "zone_management": false, 00:10:07.363 "zone_append": false, 00:10:07.363 "compare": false, 00:10:07.363 "compare_and_write": false, 00:10:07.363 "abort": true, 00:10:07.363 "seek_hole": false, 00:10:07.363 "seek_data": false, 00:10:07.363 "copy": true, 00:10:07.363 "nvme_iov_md": false 00:10:07.363 }, 00:10:07.363 "memory_domains": [ 00:10:07.363 { 00:10:07.363 "dma_device_id": "system", 00:10:07.363 "dma_device_type": 1 00:10:07.363 }, 00:10:07.363 { 00:10:07.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.363 "dma_device_type": 2 00:10:07.363 } 00:10:07.363 ], 00:10:07.363 "driver_specific": {} 00:10:07.363 } 00:10:07.363 ] 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.363 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.364 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.364 "name": "Existed_Raid", 00:10:07.364 "uuid": "08d804d8-7b65-49e7-93b8-764fad3e1edb", 00:10:07.364 "strip_size_kb": 64, 00:10:07.364 "state": "configuring", 00:10:07.364 "raid_level": "concat", 00:10:07.364 "superblock": true, 00:10:07.364 "num_base_bdevs": 4, 00:10:07.364 "num_base_bdevs_discovered": 2, 00:10:07.364 "num_base_bdevs_operational": 4, 00:10:07.364 "base_bdevs_list": [ 00:10:07.364 { 00:10:07.364 "name": "BaseBdev1", 00:10:07.364 "uuid": "8ad602e3-412f-45ae-937a-9b57f713170f", 00:10:07.364 "is_configured": true, 00:10:07.364 "data_offset": 2048, 00:10:07.364 
"data_size": 63488 00:10:07.364 }, 00:10:07.364 { 00:10:07.364 "name": "BaseBdev2", 00:10:07.364 "uuid": "cba135b3-ec6e-4163-9c0e-4637bf99fb74", 00:10:07.364 "is_configured": true, 00:10:07.364 "data_offset": 2048, 00:10:07.364 "data_size": 63488 00:10:07.364 }, 00:10:07.364 { 00:10:07.364 "name": "BaseBdev3", 00:10:07.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.364 "is_configured": false, 00:10:07.364 "data_offset": 0, 00:10:07.364 "data_size": 0 00:10:07.364 }, 00:10:07.364 { 00:10:07.364 "name": "BaseBdev4", 00:10:07.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.364 "is_configured": false, 00:10:07.364 "data_offset": 0, 00:10:07.364 "data_size": 0 00:10:07.364 } 00:10:07.364 ] 00:10:07.364 }' 00:10:07.364 15:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.364 15:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.933 [2024-11-26 15:26:06.126664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.933 BaseBdev3 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.933 [ 00:10:07.933 { 00:10:07.933 "name": "BaseBdev3", 00:10:07.933 "aliases": [ 00:10:07.933 "e2b3893a-993e-43e9-8388-d7327fc4a0cd" 00:10:07.933 ], 00:10:07.933 "product_name": "Malloc disk", 00:10:07.933 "block_size": 512, 00:10:07.933 "num_blocks": 65536, 00:10:07.933 "uuid": "e2b3893a-993e-43e9-8388-d7327fc4a0cd", 00:10:07.933 "assigned_rate_limits": { 00:10:07.933 "rw_ios_per_sec": 0, 00:10:07.933 "rw_mbytes_per_sec": 0, 00:10:07.933 "r_mbytes_per_sec": 0, 00:10:07.933 "w_mbytes_per_sec": 0 00:10:07.933 }, 00:10:07.933 "claimed": true, 00:10:07.933 "claim_type": "exclusive_write", 00:10:07.933 "zoned": false, 00:10:07.933 "supported_io_types": { 00:10:07.933 "read": true, 00:10:07.933 "write": true, 00:10:07.933 "unmap": true, 00:10:07.933 "flush": true, 00:10:07.933 "reset": true, 00:10:07.933 "nvme_admin": false, 00:10:07.933 "nvme_io": false, 00:10:07.933 "nvme_io_md": false, 
00:10:07.933 "write_zeroes": true, 00:10:07.933 "zcopy": true, 00:10:07.933 "get_zone_info": false, 00:10:07.933 "zone_management": false, 00:10:07.933 "zone_append": false, 00:10:07.933 "compare": false, 00:10:07.933 "compare_and_write": false, 00:10:07.933 "abort": true, 00:10:07.933 "seek_hole": false, 00:10:07.933 "seek_data": false, 00:10:07.933 "copy": true, 00:10:07.933 "nvme_iov_md": false 00:10:07.933 }, 00:10:07.933 "memory_domains": [ 00:10:07.933 { 00:10:07.933 "dma_device_id": "system", 00:10:07.933 "dma_device_type": 1 00:10:07.933 }, 00:10:07.933 { 00:10:07.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.933 "dma_device_type": 2 00:10:07.933 } 00:10:07.933 ], 00:10:07.933 "driver_specific": {} 00:10:07.933 } 00:10:07.933 ] 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.933 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.933 "name": "Existed_Raid", 00:10:07.933 "uuid": "08d804d8-7b65-49e7-93b8-764fad3e1edb", 00:10:07.933 "strip_size_kb": 64, 00:10:07.933 "state": "configuring", 00:10:07.933 "raid_level": "concat", 00:10:07.933 "superblock": true, 00:10:07.933 "num_base_bdevs": 4, 00:10:07.933 "num_base_bdevs_discovered": 3, 00:10:07.933 "num_base_bdevs_operational": 4, 00:10:07.933 "base_bdevs_list": [ 00:10:07.933 { 00:10:07.933 "name": "BaseBdev1", 00:10:07.933 "uuid": "8ad602e3-412f-45ae-937a-9b57f713170f", 00:10:07.933 "is_configured": true, 00:10:07.933 "data_offset": 2048, 00:10:07.933 "data_size": 63488 00:10:07.933 }, 00:10:07.933 { 00:10:07.933 "name": "BaseBdev2", 00:10:07.933 "uuid": "cba135b3-ec6e-4163-9c0e-4637bf99fb74", 00:10:07.933 "is_configured": true, 00:10:07.933 "data_offset": 2048, 00:10:07.933 "data_size": 63488 00:10:07.933 }, 00:10:07.933 { 00:10:07.933 "name": "BaseBdev3", 00:10:07.933 "uuid": 
"e2b3893a-993e-43e9-8388-d7327fc4a0cd", 00:10:07.933 "is_configured": true, 00:10:07.933 "data_offset": 2048, 00:10:07.933 "data_size": 63488 00:10:07.933 }, 00:10:07.933 { 00:10:07.933 "name": "BaseBdev4", 00:10:07.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.934 "is_configured": false, 00:10:07.934 "data_offset": 0, 00:10:07.934 "data_size": 0 00:10:07.934 } 00:10:07.934 ] 00:10:07.934 }' 00:10:07.934 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.934 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.194 [2024-11-26 15:26:06.621846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:08.194 [2024-11-26 15:26:06.622029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:08.194 [2024-11-26 15:26:06.622050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:08.194 [2024-11-26 15:26:06.622344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:08.194 BaseBdev4 00:10:08.194 [2024-11-26 15:26:06.622491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:08.194 [2024-11-26 15:26:06.622502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:08.194 [2024-11-26 15:26:06.622625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.194 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.194 [ 00:10:08.194 { 00:10:08.194 "name": "BaseBdev4", 00:10:08.194 "aliases": [ 00:10:08.194 "89a3a687-9a46-4ba0-acc6-48e6da64811c" 00:10:08.194 ], 00:10:08.194 "product_name": "Malloc disk", 00:10:08.194 "block_size": 512, 00:10:08.194 "num_blocks": 65536, 00:10:08.194 "uuid": "89a3a687-9a46-4ba0-acc6-48e6da64811c", 00:10:08.194 "assigned_rate_limits": { 00:10:08.194 "rw_ios_per_sec": 0, 00:10:08.194 "rw_mbytes_per_sec": 0, 00:10:08.194 "r_mbytes_per_sec": 0, 
00:10:08.194 "w_mbytes_per_sec": 0 00:10:08.194 }, 00:10:08.194 "claimed": true, 00:10:08.194 "claim_type": "exclusive_write", 00:10:08.194 "zoned": false, 00:10:08.194 "supported_io_types": { 00:10:08.194 "read": true, 00:10:08.194 "write": true, 00:10:08.194 "unmap": true, 00:10:08.194 "flush": true, 00:10:08.194 "reset": true, 00:10:08.194 "nvme_admin": false, 00:10:08.194 "nvme_io": false, 00:10:08.194 "nvme_io_md": false, 00:10:08.194 "write_zeroes": true, 00:10:08.194 "zcopy": true, 00:10:08.194 "get_zone_info": false, 00:10:08.194 "zone_management": false, 00:10:08.194 "zone_append": false, 00:10:08.194 "compare": false, 00:10:08.194 "compare_and_write": false, 00:10:08.194 "abort": true, 00:10:08.194 "seek_hole": false, 00:10:08.194 "seek_data": false, 00:10:08.194 "copy": true, 00:10:08.194 "nvme_iov_md": false 00:10:08.194 }, 00:10:08.194 "memory_domains": [ 00:10:08.194 { 00:10:08.195 "dma_device_id": "system", 00:10:08.195 "dma_device_type": 1 00:10:08.195 }, 00:10:08.195 { 00:10:08.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.195 "dma_device_type": 2 00:10:08.195 } 00:10:08.195 ], 00:10:08.195 "driver_specific": {} 00:10:08.195 } 00:10:08.195 ] 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.195 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.455 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.455 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.455 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.455 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.455 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.455 "name": "Existed_Raid", 00:10:08.455 "uuid": "08d804d8-7b65-49e7-93b8-764fad3e1edb", 00:10:08.455 "strip_size_kb": 64, 00:10:08.455 "state": "online", 00:10:08.455 "raid_level": "concat", 00:10:08.455 "superblock": true, 00:10:08.455 "num_base_bdevs": 4, 00:10:08.455 "num_base_bdevs_discovered": 4, 00:10:08.455 "num_base_bdevs_operational": 4, 00:10:08.455 "base_bdevs_list": [ 00:10:08.455 { 00:10:08.455 "name": "BaseBdev1", 00:10:08.455 "uuid": 
"8ad602e3-412f-45ae-937a-9b57f713170f", 00:10:08.455 "is_configured": true, 00:10:08.455 "data_offset": 2048, 00:10:08.455 "data_size": 63488 00:10:08.455 }, 00:10:08.455 { 00:10:08.455 "name": "BaseBdev2", 00:10:08.455 "uuid": "cba135b3-ec6e-4163-9c0e-4637bf99fb74", 00:10:08.455 "is_configured": true, 00:10:08.455 "data_offset": 2048, 00:10:08.455 "data_size": 63488 00:10:08.455 }, 00:10:08.455 { 00:10:08.455 "name": "BaseBdev3", 00:10:08.455 "uuid": "e2b3893a-993e-43e9-8388-d7327fc4a0cd", 00:10:08.455 "is_configured": true, 00:10:08.455 "data_offset": 2048, 00:10:08.455 "data_size": 63488 00:10:08.455 }, 00:10:08.455 { 00:10:08.455 "name": "BaseBdev4", 00:10:08.455 "uuid": "89a3a687-9a46-4ba0-acc6-48e6da64811c", 00:10:08.455 "is_configured": true, 00:10:08.455 "data_offset": 2048, 00:10:08.455 "data_size": 63488 00:10:08.455 } 00:10:08.455 ] 00:10:08.455 }' 00:10:08.455 15:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.455 15:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.715 [2024-11-26 15:26:07.086362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.715 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.715 "name": "Existed_Raid", 00:10:08.715 "aliases": [ 00:10:08.715 "08d804d8-7b65-49e7-93b8-764fad3e1edb" 00:10:08.715 ], 00:10:08.715 "product_name": "Raid Volume", 00:10:08.715 "block_size": 512, 00:10:08.715 "num_blocks": 253952, 00:10:08.715 "uuid": "08d804d8-7b65-49e7-93b8-764fad3e1edb", 00:10:08.715 "assigned_rate_limits": { 00:10:08.715 "rw_ios_per_sec": 0, 00:10:08.715 "rw_mbytes_per_sec": 0, 00:10:08.715 "r_mbytes_per_sec": 0, 00:10:08.715 "w_mbytes_per_sec": 0 00:10:08.715 }, 00:10:08.715 "claimed": false, 00:10:08.715 "zoned": false, 00:10:08.715 "supported_io_types": { 00:10:08.715 "read": true, 00:10:08.715 "write": true, 00:10:08.715 "unmap": true, 00:10:08.715 "flush": true, 00:10:08.715 "reset": true, 00:10:08.715 "nvme_admin": false, 00:10:08.715 "nvme_io": false, 00:10:08.715 "nvme_io_md": false, 00:10:08.715 "write_zeroes": true, 00:10:08.715 "zcopy": false, 00:10:08.715 "get_zone_info": false, 00:10:08.715 "zone_management": false, 00:10:08.715 "zone_append": false, 00:10:08.715 "compare": false, 00:10:08.715 "compare_and_write": false, 00:10:08.715 "abort": false, 00:10:08.715 "seek_hole": false, 00:10:08.715 "seek_data": false, 00:10:08.715 "copy": false, 00:10:08.715 "nvme_iov_md": false 00:10:08.715 }, 00:10:08.715 "memory_domains": [ 00:10:08.715 { 00:10:08.715 "dma_device_id": "system", 00:10:08.715 "dma_device_type": 1 00:10:08.715 }, 00:10:08.715 { 00:10:08.715 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.715 "dma_device_type": 2 00:10:08.715 }, 00:10:08.715 { 00:10:08.715 "dma_device_id": "system", 00:10:08.715 "dma_device_type": 1 00:10:08.715 }, 00:10:08.715 { 00:10:08.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.715 "dma_device_type": 2 00:10:08.715 }, 00:10:08.715 { 00:10:08.715 "dma_device_id": "system", 00:10:08.715 "dma_device_type": 1 00:10:08.715 }, 00:10:08.715 { 00:10:08.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.715 "dma_device_type": 2 00:10:08.715 }, 00:10:08.715 { 00:10:08.715 "dma_device_id": "system", 00:10:08.715 "dma_device_type": 1 00:10:08.715 }, 00:10:08.715 { 00:10:08.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.715 "dma_device_type": 2 00:10:08.715 } 00:10:08.715 ], 00:10:08.715 "driver_specific": { 00:10:08.715 "raid": { 00:10:08.715 "uuid": "08d804d8-7b65-49e7-93b8-764fad3e1edb", 00:10:08.715 "strip_size_kb": 64, 00:10:08.715 "state": "online", 00:10:08.716 "raid_level": "concat", 00:10:08.716 "superblock": true, 00:10:08.716 "num_base_bdevs": 4, 00:10:08.716 "num_base_bdevs_discovered": 4, 00:10:08.716 "num_base_bdevs_operational": 4, 00:10:08.716 "base_bdevs_list": [ 00:10:08.716 { 00:10:08.716 "name": "BaseBdev1", 00:10:08.716 "uuid": "8ad602e3-412f-45ae-937a-9b57f713170f", 00:10:08.716 "is_configured": true, 00:10:08.716 "data_offset": 2048, 00:10:08.716 "data_size": 63488 00:10:08.716 }, 00:10:08.716 { 00:10:08.716 "name": "BaseBdev2", 00:10:08.716 "uuid": "cba135b3-ec6e-4163-9c0e-4637bf99fb74", 00:10:08.716 "is_configured": true, 00:10:08.716 "data_offset": 2048, 00:10:08.716 "data_size": 63488 00:10:08.716 }, 00:10:08.716 { 00:10:08.716 "name": "BaseBdev3", 00:10:08.716 "uuid": "e2b3893a-993e-43e9-8388-d7327fc4a0cd", 00:10:08.716 "is_configured": true, 00:10:08.716 "data_offset": 2048, 00:10:08.716 "data_size": 63488 00:10:08.716 }, 00:10:08.716 { 00:10:08.716 "name": "BaseBdev4", 00:10:08.716 "uuid": "89a3a687-9a46-4ba0-acc6-48e6da64811c", 
00:10:08.716 "is_configured": true, 00:10:08.716 "data_offset": 2048, 00:10:08.716 "data_size": 63488 00:10:08.716 } 00:10:08.716 ] 00:10:08.716 } 00:10:08.716 } 00:10:08.716 }' 00:10:08.716 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.716 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:08.716 BaseBdev2 00:10:08.716 BaseBdev3 00:10:08.716 BaseBdev4' 00:10:08.716 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] 
| [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:08.976 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.977 [2024-11-26 15:26:07.410192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.977 [2024-11-26 15:26:07.410225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.977 [2024-11-26 15:26:07.410280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- 
# expected_state=offline 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.977 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.237 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.237 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.237 "name": "Existed_Raid", 00:10:09.237 "uuid": "08d804d8-7b65-49e7-93b8-764fad3e1edb", 
00:10:09.237 "strip_size_kb": 64, 00:10:09.237 "state": "offline", 00:10:09.237 "raid_level": "concat", 00:10:09.237 "superblock": true, 00:10:09.237 "num_base_bdevs": 4, 00:10:09.238 "num_base_bdevs_discovered": 3, 00:10:09.238 "num_base_bdevs_operational": 3, 00:10:09.238 "base_bdevs_list": [ 00:10:09.238 { 00:10:09.238 "name": null, 00:10:09.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.238 "is_configured": false, 00:10:09.238 "data_offset": 0, 00:10:09.238 "data_size": 63488 00:10:09.238 }, 00:10:09.238 { 00:10:09.238 "name": "BaseBdev2", 00:10:09.238 "uuid": "cba135b3-ec6e-4163-9c0e-4637bf99fb74", 00:10:09.238 "is_configured": true, 00:10:09.238 "data_offset": 2048, 00:10:09.238 "data_size": 63488 00:10:09.238 }, 00:10:09.238 { 00:10:09.238 "name": "BaseBdev3", 00:10:09.238 "uuid": "e2b3893a-993e-43e9-8388-d7327fc4a0cd", 00:10:09.238 "is_configured": true, 00:10:09.238 "data_offset": 2048, 00:10:09.238 "data_size": 63488 00:10:09.238 }, 00:10:09.238 { 00:10:09.238 "name": "BaseBdev4", 00:10:09.238 "uuid": "89a3a687-9a46-4ba0-acc6-48e6da64811c", 00:10:09.238 "is_configured": true, 00:10:09.238 "data_offset": 2048, 00:10:09.238 "data_size": 63488 00:10:09.238 } 00:10:09.238 ] 00:10:09.238 }' 00:10:09.238 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.238 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.497 [2024-11-26 15:26:07.877594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.497 [2024-11-26 15:26:07.944780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.497 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.758 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.758 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.758 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.758 15:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:09.758 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.758 15:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:09.758 [2024-11-26 15:26:07.991822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:09.758 [2024-11-26 15:26:07.991922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:09.758 BaseBdev2 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.758 [ 00:10:09.758 { 00:10:09.758 "name": "BaseBdev2", 00:10:09.758 "aliases": [ 00:10:09.758 "95467463-256b-488f-bd38-920dceba699c" 00:10:09.758 ], 00:10:09.758 "product_name": "Malloc disk", 00:10:09.758 "block_size": 512, 00:10:09.758 "num_blocks": 65536, 00:10:09.758 "uuid": 
"95467463-256b-488f-bd38-920dceba699c", 00:10:09.758 "assigned_rate_limits": { 00:10:09.758 "rw_ios_per_sec": 0, 00:10:09.758 "rw_mbytes_per_sec": 0, 00:10:09.758 "r_mbytes_per_sec": 0, 00:10:09.758 "w_mbytes_per_sec": 0 00:10:09.758 }, 00:10:09.758 "claimed": false, 00:10:09.758 "zoned": false, 00:10:09.758 "supported_io_types": { 00:10:09.758 "read": true, 00:10:09.758 "write": true, 00:10:09.758 "unmap": true, 00:10:09.758 "flush": true, 00:10:09.758 "reset": true, 00:10:09.758 "nvme_admin": false, 00:10:09.758 "nvme_io": false, 00:10:09.758 "nvme_io_md": false, 00:10:09.758 "write_zeroes": true, 00:10:09.758 "zcopy": true, 00:10:09.758 "get_zone_info": false, 00:10:09.758 "zone_management": false, 00:10:09.758 "zone_append": false, 00:10:09.758 "compare": false, 00:10:09.758 "compare_and_write": false, 00:10:09.758 "abort": true, 00:10:09.758 "seek_hole": false, 00:10:09.758 "seek_data": false, 00:10:09.758 "copy": true, 00:10:09.758 "nvme_iov_md": false 00:10:09.758 }, 00:10:09.758 "memory_domains": [ 00:10:09.758 { 00:10:09.758 "dma_device_id": "system", 00:10:09.758 "dma_device_type": 1 00:10:09.758 }, 00:10:09.758 { 00:10:09.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.758 "dma_device_type": 2 00:10:09.758 } 00:10:09.758 ], 00:10:09.758 "driver_specific": {} 00:10:09.758 } 00:10:09.758 ] 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.758 BaseBdev3 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.758 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.759 [ 00:10:09.759 { 00:10:09.759 "name": "BaseBdev3", 00:10:09.759 "aliases": [ 00:10:09.759 "a4082590-4e73-404f-b5eb-cc11dbd30514" 00:10:09.759 ], 00:10:09.759 "product_name": "Malloc disk", 00:10:09.759 "block_size": 512, 
00:10:09.759 "num_blocks": 65536, 00:10:09.759 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:09.759 "assigned_rate_limits": { 00:10:09.759 "rw_ios_per_sec": 0, 00:10:09.759 "rw_mbytes_per_sec": 0, 00:10:09.759 "r_mbytes_per_sec": 0, 00:10:09.759 "w_mbytes_per_sec": 0 00:10:09.759 }, 00:10:09.759 "claimed": false, 00:10:09.759 "zoned": false, 00:10:09.759 "supported_io_types": { 00:10:09.759 "read": true, 00:10:09.759 "write": true, 00:10:09.759 "unmap": true, 00:10:09.759 "flush": true, 00:10:09.759 "reset": true, 00:10:09.759 "nvme_admin": false, 00:10:09.759 "nvme_io": false, 00:10:09.759 "nvme_io_md": false, 00:10:09.759 "write_zeroes": true, 00:10:09.759 "zcopy": true, 00:10:09.759 "get_zone_info": false, 00:10:09.759 "zone_management": false, 00:10:09.759 "zone_append": false, 00:10:09.759 "compare": false, 00:10:09.759 "compare_and_write": false, 00:10:09.759 "abort": true, 00:10:09.759 "seek_hole": false, 00:10:09.759 "seek_data": false, 00:10:09.759 "copy": true, 00:10:09.759 "nvme_iov_md": false 00:10:09.759 }, 00:10:09.759 "memory_domains": [ 00:10:09.759 { 00:10:09.759 "dma_device_id": "system", 00:10:09.759 "dma_device_type": 1 00:10:09.759 }, 00:10:09.759 { 00:10:09.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.759 "dma_device_type": 2 00:10:09.759 } 00:10:09.759 ], 00:10:09.759 "driver_specific": {} 00:10:09.759 } 00:10:09.759 ] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:09.759 15:26:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.759 BaseBdev4 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.759 [ 00:10:09.759 { 00:10:09.759 "name": "BaseBdev4", 00:10:09.759 "aliases": [ 00:10:09.759 "0e6ec6a2-aff7-46ba-b818-a45f758efb57" 00:10:09.759 ], 
00:10:09.759 "product_name": "Malloc disk", 00:10:09.759 "block_size": 512, 00:10:09.759 "num_blocks": 65536, 00:10:09.759 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:09.759 "assigned_rate_limits": { 00:10:09.759 "rw_ios_per_sec": 0, 00:10:09.759 "rw_mbytes_per_sec": 0, 00:10:09.759 "r_mbytes_per_sec": 0, 00:10:09.759 "w_mbytes_per_sec": 0 00:10:09.759 }, 00:10:09.759 "claimed": false, 00:10:09.759 "zoned": false, 00:10:09.759 "supported_io_types": { 00:10:09.759 "read": true, 00:10:09.759 "write": true, 00:10:09.759 "unmap": true, 00:10:09.759 "flush": true, 00:10:09.759 "reset": true, 00:10:09.759 "nvme_admin": false, 00:10:09.759 "nvme_io": false, 00:10:09.759 "nvme_io_md": false, 00:10:09.759 "write_zeroes": true, 00:10:09.759 "zcopy": true, 00:10:09.759 "get_zone_info": false, 00:10:09.759 "zone_management": false, 00:10:09.759 "zone_append": false, 00:10:09.759 "compare": false, 00:10:09.759 "compare_and_write": false, 00:10:09.759 "abort": true, 00:10:09.759 "seek_hole": false, 00:10:09.759 "seek_data": false, 00:10:09.759 "copy": true, 00:10:09.759 "nvme_iov_md": false 00:10:09.759 }, 00:10:09.759 "memory_domains": [ 00:10:09.759 { 00:10:09.759 "dma_device_id": "system", 00:10:09.759 "dma_device_type": 1 00:10:09.759 }, 00:10:09.759 { 00:10:09.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.759 "dma_device_type": 2 00:10:09.759 } 00:10:09.759 ], 00:10:09.759 "driver_specific": {} 00:10:09.759 } 00:10:09.759 ] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.759 [2024-11-26 15:26:08.220756] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.759 [2024-11-26 15:26:08.220846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.759 [2024-11-26 15:26:08.220900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.759 [2024-11-26 15:26:08.222701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.759 [2024-11-26 15:26:08.222798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.759 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.019 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.019 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.019 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.019 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.019 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.019 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.019 "name": "Existed_Raid", 00:10:10.019 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:10.019 "strip_size_kb": 64, 00:10:10.019 "state": "configuring", 00:10:10.019 "raid_level": "concat", 00:10:10.019 "superblock": true, 00:10:10.019 "num_base_bdevs": 4, 00:10:10.019 "num_base_bdevs_discovered": 3, 00:10:10.019 "num_base_bdevs_operational": 4, 00:10:10.019 "base_bdevs_list": [ 00:10:10.019 { 00:10:10.019 "name": "BaseBdev1", 00:10:10.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.019 "is_configured": false, 00:10:10.019 "data_offset": 0, 00:10:10.019 "data_size": 0 00:10:10.019 }, 00:10:10.019 { 00:10:10.019 "name": "BaseBdev2", 00:10:10.019 "uuid": "95467463-256b-488f-bd38-920dceba699c", 00:10:10.019 "is_configured": true, 00:10:10.019 "data_offset": 2048, 00:10:10.019 "data_size": 63488 00:10:10.019 }, 00:10:10.019 { 00:10:10.019 "name": "BaseBdev3", 00:10:10.019 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:10.019 "is_configured": true, 00:10:10.019 "data_offset": 2048, 
00:10:10.019 "data_size": 63488 00:10:10.019 }, 00:10:10.019 { 00:10:10.019 "name": "BaseBdev4", 00:10:10.019 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:10.019 "is_configured": true, 00:10:10.019 "data_offset": 2048, 00:10:10.019 "data_size": 63488 00:10:10.019 } 00:10:10.019 ] 00:10:10.019 }' 00:10:10.019 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.019 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.286 [2024-11-26 15:26:08.672866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.286 15:26:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.286 "name": "Existed_Raid", 00:10:10.286 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:10.286 "strip_size_kb": 64, 00:10:10.286 "state": "configuring", 00:10:10.286 "raid_level": "concat", 00:10:10.286 "superblock": true, 00:10:10.286 "num_base_bdevs": 4, 00:10:10.286 "num_base_bdevs_discovered": 2, 00:10:10.286 "num_base_bdevs_operational": 4, 00:10:10.286 "base_bdevs_list": [ 00:10:10.286 { 00:10:10.286 "name": "BaseBdev1", 00:10:10.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.286 "is_configured": false, 00:10:10.286 "data_offset": 0, 00:10:10.286 "data_size": 0 00:10:10.286 }, 00:10:10.286 { 00:10:10.286 "name": null, 00:10:10.286 "uuid": "95467463-256b-488f-bd38-920dceba699c", 00:10:10.286 "is_configured": false, 00:10:10.286 "data_offset": 0, 00:10:10.286 "data_size": 63488 00:10:10.286 }, 00:10:10.286 { 00:10:10.286 "name": "BaseBdev3", 00:10:10.286 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:10.286 "is_configured": true, 
00:10:10.286 "data_offset": 2048, 00:10:10.286 "data_size": 63488 00:10:10.286 }, 00:10:10.286 { 00:10:10.286 "name": "BaseBdev4", 00:10:10.286 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:10.286 "is_configured": true, 00:10:10.286 "data_offset": 2048, 00:10:10.286 "data_size": 63488 00:10:10.286 } 00:10:10.286 ] 00:10:10.286 }' 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.286 15:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.867 BaseBdev1 00:10:10.867 [2024-11-26 15:26:09.167966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:10.867 15:26:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.867 [ 00:10:10.867 { 00:10:10.867 "name": "BaseBdev1", 00:10:10.867 "aliases": [ 00:10:10.867 "d7299e76-0e0c-4058-8db9-55868b75f90b" 00:10:10.867 ], 00:10:10.867 "product_name": "Malloc disk", 00:10:10.867 "block_size": 512, 00:10:10.867 "num_blocks": 65536, 00:10:10.867 "uuid": "d7299e76-0e0c-4058-8db9-55868b75f90b", 00:10:10.867 "assigned_rate_limits": { 00:10:10.867 "rw_ios_per_sec": 0, 00:10:10.867 "rw_mbytes_per_sec": 0, 00:10:10.867 "r_mbytes_per_sec": 0, 00:10:10.867 "w_mbytes_per_sec": 0 00:10:10.867 }, 00:10:10.867 "claimed": true, 00:10:10.867 "claim_type": "exclusive_write", 00:10:10.867 "zoned": false, 
00:10:10.867 "supported_io_types": { 00:10:10.867 "read": true, 00:10:10.867 "write": true, 00:10:10.867 "unmap": true, 00:10:10.867 "flush": true, 00:10:10.867 "reset": true, 00:10:10.867 "nvme_admin": false, 00:10:10.867 "nvme_io": false, 00:10:10.867 "nvme_io_md": false, 00:10:10.867 "write_zeroes": true, 00:10:10.867 "zcopy": true, 00:10:10.867 "get_zone_info": false, 00:10:10.867 "zone_management": false, 00:10:10.867 "zone_append": false, 00:10:10.867 "compare": false, 00:10:10.867 "compare_and_write": false, 00:10:10.867 "abort": true, 00:10:10.867 "seek_hole": false, 00:10:10.867 "seek_data": false, 00:10:10.867 "copy": true, 00:10:10.867 "nvme_iov_md": false 00:10:10.867 }, 00:10:10.867 "memory_domains": [ 00:10:10.867 { 00:10:10.867 "dma_device_id": "system", 00:10:10.867 "dma_device_type": 1 00:10:10.867 }, 00:10:10.867 { 00:10:10.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.867 "dma_device_type": 2 00:10:10.867 } 00:10:10.867 ], 00:10:10.867 "driver_specific": {} 00:10:10.867 } 00:10:10.867 ] 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.867 "name": "Existed_Raid", 00:10:10.867 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:10.867 "strip_size_kb": 64, 00:10:10.867 "state": "configuring", 00:10:10.867 "raid_level": "concat", 00:10:10.867 "superblock": true, 00:10:10.867 "num_base_bdevs": 4, 00:10:10.867 "num_base_bdevs_discovered": 3, 00:10:10.867 "num_base_bdevs_operational": 4, 00:10:10.867 "base_bdevs_list": [ 00:10:10.867 { 00:10:10.867 "name": "BaseBdev1", 00:10:10.867 "uuid": "d7299e76-0e0c-4058-8db9-55868b75f90b", 00:10:10.867 "is_configured": true, 00:10:10.867 "data_offset": 2048, 00:10:10.867 "data_size": 63488 00:10:10.867 }, 00:10:10.867 { 00:10:10.867 "name": null, 00:10:10.867 "uuid": "95467463-256b-488f-bd38-920dceba699c", 00:10:10.867 "is_configured": false, 00:10:10.867 "data_offset": 0, 00:10:10.867 "data_size": 63488 00:10:10.867 }, 00:10:10.867 { 
00:10:10.867 "name": "BaseBdev3", 00:10:10.867 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:10.867 "is_configured": true, 00:10:10.867 "data_offset": 2048, 00:10:10.867 "data_size": 63488 00:10:10.867 }, 00:10:10.867 { 00:10:10.867 "name": "BaseBdev4", 00:10:10.867 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:10.867 "is_configured": true, 00:10:10.867 "data_offset": 2048, 00:10:10.867 "data_size": 63488 00:10:10.867 } 00:10:10.867 ] 00:10:10.867 }' 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.867 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.436 [2024-11-26 15:26:09.676154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.436 15:26:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.436 "name": "Existed_Raid", 00:10:11.436 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:11.436 "strip_size_kb": 64, 
00:10:11.436 "state": "configuring", 00:10:11.436 "raid_level": "concat", 00:10:11.436 "superblock": true, 00:10:11.436 "num_base_bdevs": 4, 00:10:11.436 "num_base_bdevs_discovered": 2, 00:10:11.436 "num_base_bdevs_operational": 4, 00:10:11.436 "base_bdevs_list": [ 00:10:11.436 { 00:10:11.436 "name": "BaseBdev1", 00:10:11.436 "uuid": "d7299e76-0e0c-4058-8db9-55868b75f90b", 00:10:11.436 "is_configured": true, 00:10:11.436 "data_offset": 2048, 00:10:11.436 "data_size": 63488 00:10:11.436 }, 00:10:11.436 { 00:10:11.436 "name": null, 00:10:11.436 "uuid": "95467463-256b-488f-bd38-920dceba699c", 00:10:11.436 "is_configured": false, 00:10:11.436 "data_offset": 0, 00:10:11.436 "data_size": 63488 00:10:11.436 }, 00:10:11.436 { 00:10:11.436 "name": null, 00:10:11.436 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:11.436 "is_configured": false, 00:10:11.436 "data_offset": 0, 00:10:11.436 "data_size": 63488 00:10:11.436 }, 00:10:11.436 { 00:10:11.436 "name": "BaseBdev4", 00:10:11.436 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:11.436 "is_configured": true, 00:10:11.436 "data_offset": 2048, 00:10:11.436 "data_size": 63488 00:10:11.436 } 00:10:11.436 ] 00:10:11.436 }' 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.436 15:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.697 
15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.697 [2024-11-26 15:26:10.052296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.697 "name": "Existed_Raid", 00:10:11.697 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:11.697 "strip_size_kb": 64, 00:10:11.697 "state": "configuring", 00:10:11.697 "raid_level": "concat", 00:10:11.697 "superblock": true, 00:10:11.697 "num_base_bdevs": 4, 00:10:11.697 "num_base_bdevs_discovered": 3, 00:10:11.697 "num_base_bdevs_operational": 4, 00:10:11.697 "base_bdevs_list": [ 00:10:11.697 { 00:10:11.697 "name": "BaseBdev1", 00:10:11.697 "uuid": "d7299e76-0e0c-4058-8db9-55868b75f90b", 00:10:11.697 "is_configured": true, 00:10:11.697 "data_offset": 2048, 00:10:11.697 "data_size": 63488 00:10:11.697 }, 00:10:11.697 { 00:10:11.697 "name": null, 00:10:11.697 "uuid": "95467463-256b-488f-bd38-920dceba699c", 00:10:11.697 "is_configured": false, 00:10:11.697 "data_offset": 0, 00:10:11.697 "data_size": 63488 00:10:11.697 }, 00:10:11.697 { 00:10:11.697 "name": "BaseBdev3", 00:10:11.697 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:11.697 "is_configured": true, 00:10:11.697 "data_offset": 2048, 00:10:11.697 "data_size": 63488 00:10:11.697 }, 00:10:11.697 { 00:10:11.697 "name": "BaseBdev4", 00:10:11.697 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:11.697 "is_configured": true, 00:10:11.697 "data_offset": 2048, 00:10:11.697 "data_size": 63488 00:10:11.697 } 00:10:11.697 ] 00:10:11.697 }' 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.697 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.266 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.266 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.266 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.266 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.267 [2024-11-26 15:26:10.528449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=64 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.267 "name": "Existed_Raid", 00:10:12.267 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:12.267 "strip_size_kb": 64, 00:10:12.267 "state": "configuring", 00:10:12.267 "raid_level": "concat", 00:10:12.267 "superblock": true, 00:10:12.267 "num_base_bdevs": 4, 00:10:12.267 "num_base_bdevs_discovered": 2, 00:10:12.267 "num_base_bdevs_operational": 4, 00:10:12.267 "base_bdevs_list": [ 00:10:12.267 { 00:10:12.267 "name": null, 00:10:12.267 "uuid": "d7299e76-0e0c-4058-8db9-55868b75f90b", 00:10:12.267 "is_configured": false, 00:10:12.267 "data_offset": 0, 00:10:12.267 "data_size": 63488 00:10:12.267 }, 00:10:12.267 { 00:10:12.267 "name": null, 00:10:12.267 "uuid": 
"95467463-256b-488f-bd38-920dceba699c", 00:10:12.267 "is_configured": false, 00:10:12.267 "data_offset": 0, 00:10:12.267 "data_size": 63488 00:10:12.267 }, 00:10:12.267 { 00:10:12.267 "name": "BaseBdev3", 00:10:12.267 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:12.267 "is_configured": true, 00:10:12.267 "data_offset": 2048, 00:10:12.267 "data_size": 63488 00:10:12.267 }, 00:10:12.267 { 00:10:12.267 "name": "BaseBdev4", 00:10:12.267 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:12.267 "is_configured": true, 00:10:12.267 "data_offset": 2048, 00:10:12.267 "data_size": 63488 00:10:12.267 } 00:10:12.267 ] 00:10:12.267 }' 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.267 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.527 [2024-11-26 15:26:10.979152] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.527 15:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.786 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.786 15:26:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.786 "name": "Existed_Raid", 00:10:12.786 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:12.786 "strip_size_kb": 64, 00:10:12.786 "state": "configuring", 00:10:12.786 "raid_level": "concat", 00:10:12.786 "superblock": true, 00:10:12.786 "num_base_bdevs": 4, 00:10:12.786 "num_base_bdevs_discovered": 3, 00:10:12.786 "num_base_bdevs_operational": 4, 00:10:12.786 "base_bdevs_list": [ 00:10:12.786 { 00:10:12.786 "name": null, 00:10:12.786 "uuid": "d7299e76-0e0c-4058-8db9-55868b75f90b", 00:10:12.786 "is_configured": false, 00:10:12.786 "data_offset": 0, 00:10:12.786 "data_size": 63488 00:10:12.786 }, 00:10:12.786 { 00:10:12.786 "name": "BaseBdev2", 00:10:12.786 "uuid": "95467463-256b-488f-bd38-920dceba699c", 00:10:12.786 "is_configured": true, 00:10:12.786 "data_offset": 2048, 00:10:12.786 "data_size": 63488 00:10:12.786 }, 00:10:12.786 { 00:10:12.786 "name": "BaseBdev3", 00:10:12.786 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:12.786 "is_configured": true, 00:10:12.786 "data_offset": 2048, 00:10:12.786 "data_size": 63488 00:10:12.786 }, 00:10:12.786 { 00:10:12.786 "name": "BaseBdev4", 00:10:12.786 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:12.786 "is_configured": true, 00:10:12.787 "data_offset": 2048, 00:10:12.787 "data_size": 63488 00:10:12.787 } 00:10:12.787 ] 00:10:12.787 }' 00:10:12.787 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.787 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d7299e76-0e0c-4058-8db9-55868b75f90b 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.046 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.046 [2024-11-26 15:26:11.518446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:13.046 [2024-11-26 15:26:11.518686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:13.046 [2024-11-26 15:26:11.518744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:13.046 [2024-11-26 15:26:11.519016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:13.046 [2024-11-26 15:26:11.519172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.046 [2024-11-26 15:26:11.519223] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:13.046 [2024-11-26 15:26:11.519366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.046 NewBaseBdev 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.306 [ 00:10:13.306 { 00:10:13.306 "name": "NewBaseBdev", 00:10:13.306 "aliases": [ 00:10:13.306 
"d7299e76-0e0c-4058-8db9-55868b75f90b" 00:10:13.306 ], 00:10:13.306 "product_name": "Malloc disk", 00:10:13.306 "block_size": 512, 00:10:13.306 "num_blocks": 65536, 00:10:13.306 "uuid": "d7299e76-0e0c-4058-8db9-55868b75f90b", 00:10:13.306 "assigned_rate_limits": { 00:10:13.306 "rw_ios_per_sec": 0, 00:10:13.306 "rw_mbytes_per_sec": 0, 00:10:13.306 "r_mbytes_per_sec": 0, 00:10:13.306 "w_mbytes_per_sec": 0 00:10:13.306 }, 00:10:13.306 "claimed": true, 00:10:13.306 "claim_type": "exclusive_write", 00:10:13.306 "zoned": false, 00:10:13.306 "supported_io_types": { 00:10:13.306 "read": true, 00:10:13.306 "write": true, 00:10:13.306 "unmap": true, 00:10:13.306 "flush": true, 00:10:13.306 "reset": true, 00:10:13.306 "nvme_admin": false, 00:10:13.306 "nvme_io": false, 00:10:13.306 "nvme_io_md": false, 00:10:13.306 "write_zeroes": true, 00:10:13.306 "zcopy": true, 00:10:13.306 "get_zone_info": false, 00:10:13.306 "zone_management": false, 00:10:13.306 "zone_append": false, 00:10:13.306 "compare": false, 00:10:13.306 "compare_and_write": false, 00:10:13.306 "abort": true, 00:10:13.306 "seek_hole": false, 00:10:13.306 "seek_data": false, 00:10:13.306 "copy": true, 00:10:13.306 "nvme_iov_md": false 00:10:13.306 }, 00:10:13.306 "memory_domains": [ 00:10:13.306 { 00:10:13.306 "dma_device_id": "system", 00:10:13.306 "dma_device_type": 1 00:10:13.306 }, 00:10:13.306 { 00:10:13.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.306 "dma_device_type": 2 00:10:13.306 } 00:10:13.306 ], 00:10:13.306 "driver_specific": {} 00:10:13.306 } 00:10:13.306 ] 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.306 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.306 "name": "Existed_Raid", 00:10:13.306 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:13.306 "strip_size_kb": 64, 00:10:13.306 "state": "online", 00:10:13.306 "raid_level": "concat", 00:10:13.306 "superblock": true, 00:10:13.306 "num_base_bdevs": 4, 00:10:13.306 "num_base_bdevs_discovered": 4, 00:10:13.306 
"num_base_bdevs_operational": 4, 00:10:13.306 "base_bdevs_list": [ 00:10:13.306 { 00:10:13.306 "name": "NewBaseBdev", 00:10:13.307 "uuid": "d7299e76-0e0c-4058-8db9-55868b75f90b", 00:10:13.307 "is_configured": true, 00:10:13.307 "data_offset": 2048, 00:10:13.307 "data_size": 63488 00:10:13.307 }, 00:10:13.307 { 00:10:13.307 "name": "BaseBdev2", 00:10:13.307 "uuid": "95467463-256b-488f-bd38-920dceba699c", 00:10:13.307 "is_configured": true, 00:10:13.307 "data_offset": 2048, 00:10:13.307 "data_size": 63488 00:10:13.307 }, 00:10:13.307 { 00:10:13.307 "name": "BaseBdev3", 00:10:13.307 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:13.307 "is_configured": true, 00:10:13.307 "data_offset": 2048, 00:10:13.307 "data_size": 63488 00:10:13.307 }, 00:10:13.307 { 00:10:13.307 "name": "BaseBdev4", 00:10:13.307 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:13.307 "is_configured": true, 00:10:13.307 "data_offset": 2048, 00:10:13.307 "data_size": 63488 00:10:13.307 } 00:10:13.307 ] 00:10:13.307 }' 00:10:13.307 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.307 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.567 15:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.567 [2024-11-26 15:26:11.998958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.567 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.567 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.567 "name": "Existed_Raid", 00:10:13.567 "aliases": [ 00:10:13.567 "4f2d6561-264f-45b7-a01d-6730559e5ebd" 00:10:13.567 ], 00:10:13.567 "product_name": "Raid Volume", 00:10:13.567 "block_size": 512, 00:10:13.567 "num_blocks": 253952, 00:10:13.567 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:13.567 "assigned_rate_limits": { 00:10:13.567 "rw_ios_per_sec": 0, 00:10:13.567 "rw_mbytes_per_sec": 0, 00:10:13.567 "r_mbytes_per_sec": 0, 00:10:13.567 "w_mbytes_per_sec": 0 00:10:13.567 }, 00:10:13.567 "claimed": false, 00:10:13.567 "zoned": false, 00:10:13.567 "supported_io_types": { 00:10:13.567 "read": true, 00:10:13.567 "write": true, 00:10:13.567 "unmap": true, 00:10:13.567 "flush": true, 00:10:13.567 "reset": true, 00:10:13.567 "nvme_admin": false, 00:10:13.567 "nvme_io": false, 00:10:13.567 "nvme_io_md": false, 00:10:13.567 "write_zeroes": true, 00:10:13.567 "zcopy": false, 00:10:13.567 "get_zone_info": false, 00:10:13.567 "zone_management": false, 00:10:13.567 "zone_append": false, 00:10:13.567 "compare": false, 00:10:13.567 "compare_and_write": false, 00:10:13.567 "abort": false, 00:10:13.567 "seek_hole": false, 00:10:13.567 "seek_data": false, 00:10:13.567 "copy": false, 00:10:13.567 "nvme_iov_md": false 00:10:13.567 }, 00:10:13.567 
"memory_domains": [ 00:10:13.567 { 00:10:13.567 "dma_device_id": "system", 00:10:13.567 "dma_device_type": 1 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.567 "dma_device_type": 2 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "dma_device_id": "system", 00:10:13.567 "dma_device_type": 1 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.567 "dma_device_type": 2 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "dma_device_id": "system", 00:10:13.567 "dma_device_type": 1 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.567 "dma_device_type": 2 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "dma_device_id": "system", 00:10:13.567 "dma_device_type": 1 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.567 "dma_device_type": 2 00:10:13.567 } 00:10:13.567 ], 00:10:13.567 "driver_specific": { 00:10:13.567 "raid": { 00:10:13.567 "uuid": "4f2d6561-264f-45b7-a01d-6730559e5ebd", 00:10:13.567 "strip_size_kb": 64, 00:10:13.567 "state": "online", 00:10:13.567 "raid_level": "concat", 00:10:13.567 "superblock": true, 00:10:13.567 "num_base_bdevs": 4, 00:10:13.567 "num_base_bdevs_discovered": 4, 00:10:13.567 "num_base_bdevs_operational": 4, 00:10:13.567 "base_bdevs_list": [ 00:10:13.567 { 00:10:13.567 "name": "NewBaseBdev", 00:10:13.567 "uuid": "d7299e76-0e0c-4058-8db9-55868b75f90b", 00:10:13.567 "is_configured": true, 00:10:13.567 "data_offset": 2048, 00:10:13.567 "data_size": 63488 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "name": "BaseBdev2", 00:10:13.567 "uuid": "95467463-256b-488f-bd38-920dceba699c", 00:10:13.567 "is_configured": true, 00:10:13.567 "data_offset": 2048, 00:10:13.567 "data_size": 63488 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "name": "BaseBdev3", 00:10:13.567 "uuid": "a4082590-4e73-404f-b5eb-cc11dbd30514", 00:10:13.567 "is_configured": true, 00:10:13.567 "data_offset": 2048, 00:10:13.567 
"data_size": 63488 00:10:13.567 }, 00:10:13.567 { 00:10:13.567 "name": "BaseBdev4", 00:10:13.567 "uuid": "0e6ec6a2-aff7-46ba-b818-a45f758efb57", 00:10:13.567 "is_configured": true, 00:10:13.567 "data_offset": 2048, 00:10:13.567 "data_size": 63488 00:10:13.567 } 00:10:13.567 ] 00:10:13.567 } 00:10:13.567 } 00:10:13.567 }' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:13.828 BaseBdev2 00:10:13.828 BaseBdev3 00:10:13.828 BaseBdev4' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.828 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.088 [2024-11-26 15:26:12.326714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.088 [2024-11-26 15:26:12.326794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.088 [2024-11-26 15:26:12.326889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.088 [2024-11-26 15:26:12.326957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.088 [2024-11-26 15:26:12.326970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.088 
15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84379 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84379 ']' 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84379 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84379 00:10:14.088 killing process with pid 84379 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84379' 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84379 00:10:14.088 [2024-11-26 15:26:12.373993] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.088 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84379 00:10:14.088 [2024-11-26 15:26:12.414385] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.347 15:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:14.347 ************************************ 00:10:14.347 END TEST raid_state_function_test_sb 00:10:14.347 ************************************ 00:10:14.347 00:10:14.347 real 0m9.268s 00:10:14.347 user 0m15.861s 00:10:14.347 sys 0m1.896s 00:10:14.347 15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.347 
15:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.347 15:26:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:14.347 15:26:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:14.347 15:26:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.347 15:26:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.347 ************************************ 00:10:14.347 START TEST raid_superblock_test 00:10:14.347 ************************************ 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 
00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85027 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85027 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85027 ']' 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.347 15:26:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.347 [2024-11-26 15:26:12.770301] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:10:14.347 [2024-11-26 15:26:12.770505] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85027 ] 00:10:14.606 [2024-11-26 15:26:12.905221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:14.606 [2024-11-26 15:26:12.943281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.606 [2024-11-26 15:26:12.969545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.606 [2024-11-26 15:26:13.012505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.606 [2024-11-26 15:26:13.012632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.175 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.175 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.175 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.176 malloc1 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.176 [2024-11-26 15:26:13.604562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:15.176 [2024-11-26 15:26:13.604629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.176 [2024-11-26 15:26:13.604656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:15.176 [2024-11-26 15:26:13.604669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.176 [2024-11-26 15:26:13.606837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.176 [2024-11-26 15:26:13.606911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:15.176 pt1 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.176 malloc2 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.176 [2024-11-26 15:26:13.633107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:15.176 [2024-11-26 15:26:13.633157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.176 [2024-11-26 15:26:13.633173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:15.176 [2024-11-26 15:26:13.633197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.176 [2024-11-26 15:26:13.635234] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.176 [2024-11-26 15:26:13.635265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:15.176 pt2 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.176 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 malloc3 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.435 15:26:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 [2024-11-26 15:26:13.661606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.435 [2024-11-26 15:26:13.661707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.435 [2024-11-26 15:26:13.661743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:15.435 [2024-11-26 15:26:13.661770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.435 [2024-11-26 15:26:13.663825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.435 [2024-11-26 15:26:13.663890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.435 pt3 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 malloc4 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 [2024-11-26 15:26:13.705522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:15.435 [2024-11-26 15:26:13.705608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.435 [2024-11-26 15:26:13.705645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:15.435 [2024-11-26 15:26:13.705675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.435 [2024-11-26 15:26:13.707768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.435 [2024-11-26 15:26:13.707830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:15.435 pt4 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.435 15:26:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.435 [2024-11-26 15:26:13.717593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:15.435 [2024-11-26 15:26:13.719458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:15.435 [2024-11-26 15:26:13.719524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.435 [2024-11-26 15:26:13.719585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:15.435 [2024-11-26 15:26:13.719733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:15.435 [2024-11-26 15:26:13.719744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.435 [2024-11-26 15:26:13.719970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:15.435 [2024-11-26 15:26:13.720106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:15.435 [2024-11-26 15:26:13.720123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:15.435 [2024-11-26 15:26:13.720254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.435 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.436 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.436 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.436 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.436 "name": "raid_bdev1", 00:10:15.436 "uuid": "799dae47-95ae-43b2-8566-c1c22ad0f26e", 00:10:15.436 "strip_size_kb": 64, 00:10:15.436 "state": "online", 00:10:15.436 "raid_level": "concat", 00:10:15.436 "superblock": true, 00:10:15.436 "num_base_bdevs": 4, 00:10:15.436 "num_base_bdevs_discovered": 4, 00:10:15.436 "num_base_bdevs_operational": 4, 00:10:15.436 "base_bdevs_list": [ 00:10:15.436 { 00:10:15.436 "name": "pt1", 00:10:15.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.436 "is_configured": true, 00:10:15.436 "data_offset": 2048, 00:10:15.436 "data_size": 63488 00:10:15.436 }, 00:10:15.436 { 00:10:15.436 "name": "pt2", 00:10:15.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.436 "is_configured": true, 00:10:15.436 "data_offset": 2048, 00:10:15.436 
"data_size": 63488 00:10:15.436 }, 00:10:15.436 { 00:10:15.436 "name": "pt3", 00:10:15.436 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.436 "is_configured": true, 00:10:15.436 "data_offset": 2048, 00:10:15.436 "data_size": 63488 00:10:15.436 }, 00:10:15.436 { 00:10:15.436 "name": "pt4", 00:10:15.436 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.436 "is_configured": true, 00:10:15.436 "data_offset": 2048, 00:10:15.436 "data_size": 63488 00:10:15.436 } 00:10:15.436 ] 00:10:15.436 }' 00:10:15.436 15:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.436 15:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.695 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.695 [2024-11-26 15:26:14.149980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.955 "name": "raid_bdev1", 00:10:15.955 "aliases": [ 00:10:15.955 "799dae47-95ae-43b2-8566-c1c22ad0f26e" 00:10:15.955 ], 00:10:15.955 "product_name": "Raid Volume", 00:10:15.955 "block_size": 512, 00:10:15.955 "num_blocks": 253952, 00:10:15.955 "uuid": "799dae47-95ae-43b2-8566-c1c22ad0f26e", 00:10:15.955 "assigned_rate_limits": { 00:10:15.955 "rw_ios_per_sec": 0, 00:10:15.955 "rw_mbytes_per_sec": 0, 00:10:15.955 "r_mbytes_per_sec": 0, 00:10:15.955 "w_mbytes_per_sec": 0 00:10:15.955 }, 00:10:15.955 "claimed": false, 00:10:15.955 "zoned": false, 00:10:15.955 "supported_io_types": { 00:10:15.955 "read": true, 00:10:15.955 "write": true, 00:10:15.955 "unmap": true, 00:10:15.955 "flush": true, 00:10:15.955 "reset": true, 00:10:15.955 "nvme_admin": false, 00:10:15.955 "nvme_io": false, 00:10:15.955 "nvme_io_md": false, 00:10:15.955 "write_zeroes": true, 00:10:15.955 "zcopy": false, 00:10:15.955 "get_zone_info": false, 00:10:15.955 "zone_management": false, 00:10:15.955 "zone_append": false, 00:10:15.955 "compare": false, 00:10:15.955 "compare_and_write": false, 00:10:15.955 "abort": false, 00:10:15.955 "seek_hole": false, 00:10:15.955 "seek_data": false, 00:10:15.955 "copy": false, 00:10:15.955 "nvme_iov_md": false 00:10:15.955 }, 00:10:15.955 "memory_domains": [ 00:10:15.955 { 00:10:15.955 "dma_device_id": "system", 00:10:15.955 "dma_device_type": 1 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.955 "dma_device_type": 2 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "dma_device_id": "system", 00:10:15.955 "dma_device_type": 1 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.955 "dma_device_type": 2 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "dma_device_id": "system", 00:10:15.955 "dma_device_type": 1 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:15.955 "dma_device_type": 2 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "dma_device_id": "system", 00:10:15.955 "dma_device_type": 1 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.955 "dma_device_type": 2 00:10:15.955 } 00:10:15.955 ], 00:10:15.955 "driver_specific": { 00:10:15.955 "raid": { 00:10:15.955 "uuid": "799dae47-95ae-43b2-8566-c1c22ad0f26e", 00:10:15.955 "strip_size_kb": 64, 00:10:15.955 "state": "online", 00:10:15.955 "raid_level": "concat", 00:10:15.955 "superblock": true, 00:10:15.955 "num_base_bdevs": 4, 00:10:15.955 "num_base_bdevs_discovered": 4, 00:10:15.955 "num_base_bdevs_operational": 4, 00:10:15.955 "base_bdevs_list": [ 00:10:15.955 { 00:10:15.955 "name": "pt1", 00:10:15.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.955 "is_configured": true, 00:10:15.955 "data_offset": 2048, 00:10:15.955 "data_size": 63488 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "name": "pt2", 00:10:15.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.955 "is_configured": true, 00:10:15.955 "data_offset": 2048, 00:10:15.955 "data_size": 63488 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "name": "pt3", 00:10:15.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.955 "is_configured": true, 00:10:15.955 "data_offset": 2048, 00:10:15.955 "data_size": 63488 00:10:15.955 }, 00:10:15.955 { 00:10:15.955 "name": "pt4", 00:10:15.955 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.955 "is_configured": true, 00:10:15.955 "data_offset": 2048, 00:10:15.955 "data_size": 63488 00:10:15.955 } 00:10:15.955 ] 00:10:15.955 } 00:10:15.955 } 00:10:15.955 }' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:15.955 pt2 00:10:15.955 pt3 00:10:15.955 
pt4' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.955 15:26:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:15.955 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.956 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.956 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 [2024-11-26 15:26:14.462053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=799dae47-95ae-43b2-8566-c1c22ad0f26e 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 799dae47-95ae-43b2-8566-c1c22ad0f26e ']' 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 [2024-11-26 15:26:14.493736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.216 [2024-11-26 15:26:14.493801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.216 [2024-11-26 15:26:14.493907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.216 [2024-11-26 15:26:14.494008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.216 [2024-11-26 15:26:14.494061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.216 [2024-11-26 15:26:14.649819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:16.216 [2024-11-26 15:26:14.651612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:16.216 [2024-11-26 15:26:14.651706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:16.216 [2024-11-26 15:26:14.651741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:16.216 [2024-11-26 15:26:14.651784] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:16.216 [2024-11-26 15:26:14.651837] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:16.216 [2024-11-26 15:26:14.651856] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:16.216 [2024-11-26 15:26:14.651873] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:16.216 [2024-11-26 
15:26:14.651884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.216 [2024-11-26 15:26:14.651895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:16.216 request: 00:10:16.216 { 00:10:16.216 "name": "raid_bdev1", 00:10:16.216 "raid_level": "concat", 00:10:16.216 "base_bdevs": [ 00:10:16.216 "malloc1", 00:10:16.216 "malloc2", 00:10:16.216 "malloc3", 00:10:16.216 "malloc4" 00:10:16.216 ], 00:10:16.216 "strip_size_kb": 64, 00:10:16.216 "superblock": false, 00:10:16.216 "method": "bdev_raid_create", 00:10:16.216 "req_id": 1 00:10:16.216 } 00:10:16.216 Got JSON-RPC error response 00:10:16.216 response: 00:10:16.216 { 00:10:16.216 "code": -17, 00:10:16.216 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:16.216 } 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:16.216 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:16.217 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:16.217 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:16.217 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:16.217 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.217 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.217 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.217 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:16.217 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.476 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:16.476 15:26:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.477 [2024-11-26 15:26:14.713804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.477 [2024-11-26 15:26:14.713895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.477 [2024-11-26 15:26:14.713951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:16.477 [2024-11-26 15:26:14.713982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.477 [2024-11-26 15:26:14.716083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.477 [2024-11-26 15:26:14.716154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.477 [2024-11-26 15:26:14.716261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:16.477 [2024-11-26 15:26:14.716333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.477 pt1 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.477 "name": "raid_bdev1", 00:10:16.477 "uuid": "799dae47-95ae-43b2-8566-c1c22ad0f26e", 00:10:16.477 "strip_size_kb": 64, 00:10:16.477 "state": "configuring", 00:10:16.477 "raid_level": "concat", 00:10:16.477 "superblock": true, 00:10:16.477 "num_base_bdevs": 4, 00:10:16.477 "num_base_bdevs_discovered": 1, 00:10:16.477 "num_base_bdevs_operational": 4, 00:10:16.477 "base_bdevs_list": [ 00:10:16.477 { 00:10:16.477 "name": "pt1", 00:10:16.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.477 "is_configured": true, 00:10:16.477 "data_offset": 2048, 00:10:16.477 "data_size": 63488 00:10:16.477 }, 00:10:16.477 { 00:10:16.477 "name": null, 00:10:16.477 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:16.477 "is_configured": false, 00:10:16.477 "data_offset": 2048, 00:10:16.477 "data_size": 63488 00:10:16.477 }, 00:10:16.477 { 00:10:16.477 "name": null, 00:10:16.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.477 "is_configured": false, 00:10:16.477 "data_offset": 2048, 00:10:16.477 "data_size": 63488 00:10:16.477 }, 00:10:16.477 { 00:10:16.477 "name": null, 00:10:16.477 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.477 "is_configured": false, 00:10:16.477 "data_offset": 2048, 00:10:16.477 "data_size": 63488 00:10:16.477 } 00:10:16.477 ] 00:10:16.477 }' 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.477 15:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.737 [2024-11-26 15:26:15.141946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.737 [2024-11-26 15:26:15.142068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.737 [2024-11-26 15:26:15.142104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:16.737 [2024-11-26 15:26:15.142133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.737 [2024-11-26 15:26:15.142555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.737 [2024-11-26 15:26:15.142614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:10:16.737 [2024-11-26 15:26:15.142711] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:16.737 [2024-11-26 15:26:15.142762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.737 pt2 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.737 [2024-11-26 15:26:15.149928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.737 "name": "raid_bdev1", 00:10:16.737 "uuid": "799dae47-95ae-43b2-8566-c1c22ad0f26e", 00:10:16.737 "strip_size_kb": 64, 00:10:16.737 "state": "configuring", 00:10:16.737 "raid_level": "concat", 00:10:16.737 "superblock": true, 00:10:16.737 "num_base_bdevs": 4, 00:10:16.737 "num_base_bdevs_discovered": 1, 00:10:16.737 "num_base_bdevs_operational": 4, 00:10:16.737 "base_bdevs_list": [ 00:10:16.737 { 00:10:16.737 "name": "pt1", 00:10:16.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.737 "is_configured": true, 00:10:16.737 "data_offset": 2048, 00:10:16.737 "data_size": 63488 00:10:16.737 }, 00:10:16.737 { 00:10:16.737 "name": null, 00:10:16.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.737 "is_configured": false, 00:10:16.737 "data_offset": 0, 00:10:16.737 "data_size": 63488 00:10:16.737 }, 00:10:16.737 { 00:10:16.737 "name": null, 00:10:16.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.737 "is_configured": false, 00:10:16.737 "data_offset": 2048, 00:10:16.737 "data_size": 63488 00:10:16.737 }, 00:10:16.737 { 00:10:16.737 "name": null, 00:10:16.737 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.737 "is_configured": false, 00:10:16.737 "data_offset": 2048, 00:10:16.737 "data_size": 63488 00:10:16.737 } 00:10:16.737 ] 00:10:16.737 }' 
00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.737 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.306 [2024-11-26 15:26:15.590067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.306 [2024-11-26 15:26:15.590171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.306 [2024-11-26 15:26:15.590216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:17.306 [2024-11-26 15:26:15.590281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.306 [2024-11-26 15:26:15.590688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.306 [2024-11-26 15:26:15.590742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.306 [2024-11-26 15:26:15.590839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:17.306 [2024-11-26 15:26:15.590884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.306 pt2 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.306 15:26:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.306 [2024-11-26 15:26:15.602073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.306 [2024-11-26 15:26:15.602159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.306 [2024-11-26 15:26:15.602195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:17.306 [2024-11-26 15:26:15.602204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.306 [2024-11-26 15:26:15.602523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.306 [2024-11-26 15:26:15.602538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.306 [2024-11-26 15:26:15.602589] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:17.306 [2024-11-26 15:26:15.602604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.306 pt3 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.306 [2024-11-26 15:26:15.614059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:17.306 [2024-11-26 15:26:15.614108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.306 [2024-11-26 15:26:15.614126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:17.306 [2024-11-26 15:26:15.614133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.306 [2024-11-26 15:26:15.614461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.306 [2024-11-26 15:26:15.614477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:17.306 [2024-11-26 15:26:15.614534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:17.306 [2024-11-26 15:26:15.614551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:17.306 [2024-11-26 15:26:15.614647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:17.306 [2024-11-26 15:26:15.614655] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:17.306 [2024-11-26 15:26:15.614869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:17.306 [2024-11-26 15:26:15.614996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:17.306 [2024-11-26 15:26:15.615009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:17.306 [2024-11-26 15:26:15.615101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.306 pt4 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.306 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.306 "name": 
"raid_bdev1", 00:10:17.306 "uuid": "799dae47-95ae-43b2-8566-c1c22ad0f26e", 00:10:17.306 "strip_size_kb": 64, 00:10:17.306 "state": "online", 00:10:17.306 "raid_level": "concat", 00:10:17.306 "superblock": true, 00:10:17.306 "num_base_bdevs": 4, 00:10:17.306 "num_base_bdevs_discovered": 4, 00:10:17.306 "num_base_bdevs_operational": 4, 00:10:17.306 "base_bdevs_list": [ 00:10:17.306 { 00:10:17.306 "name": "pt1", 00:10:17.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.306 "is_configured": true, 00:10:17.306 "data_offset": 2048, 00:10:17.306 "data_size": 63488 00:10:17.306 }, 00:10:17.306 { 00:10:17.306 "name": "pt2", 00:10:17.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.307 "is_configured": true, 00:10:17.307 "data_offset": 2048, 00:10:17.307 "data_size": 63488 00:10:17.307 }, 00:10:17.307 { 00:10:17.307 "name": "pt3", 00:10:17.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.307 "is_configured": true, 00:10:17.307 "data_offset": 2048, 00:10:17.307 "data_size": 63488 00:10:17.307 }, 00:10:17.307 { 00:10:17.307 "name": "pt4", 00:10:17.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.307 "is_configured": true, 00:10:17.307 "data_offset": 2048, 00:10:17.307 "data_size": 63488 00:10:17.307 } 00:10:17.307 ] 00:10:17.307 }' 00:10:17.307 15:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.307 15:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.894 [2024-11-26 15:26:16.074537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.894 "name": "raid_bdev1", 00:10:17.894 "aliases": [ 00:10:17.894 "799dae47-95ae-43b2-8566-c1c22ad0f26e" 00:10:17.894 ], 00:10:17.894 "product_name": "Raid Volume", 00:10:17.894 "block_size": 512, 00:10:17.894 "num_blocks": 253952, 00:10:17.894 "uuid": "799dae47-95ae-43b2-8566-c1c22ad0f26e", 00:10:17.894 "assigned_rate_limits": { 00:10:17.894 "rw_ios_per_sec": 0, 00:10:17.894 "rw_mbytes_per_sec": 0, 00:10:17.894 "r_mbytes_per_sec": 0, 00:10:17.894 "w_mbytes_per_sec": 0 00:10:17.894 }, 00:10:17.894 "claimed": false, 00:10:17.894 "zoned": false, 00:10:17.894 "supported_io_types": { 00:10:17.894 "read": true, 00:10:17.894 "write": true, 00:10:17.894 "unmap": true, 00:10:17.894 "flush": true, 00:10:17.894 "reset": true, 00:10:17.894 "nvme_admin": false, 00:10:17.894 "nvme_io": false, 00:10:17.894 "nvme_io_md": false, 00:10:17.894 "write_zeroes": true, 00:10:17.894 "zcopy": false, 00:10:17.894 "get_zone_info": false, 00:10:17.894 "zone_management": false, 00:10:17.894 "zone_append": false, 00:10:17.894 "compare": false, 00:10:17.894 "compare_and_write": false, 00:10:17.894 "abort": 
false, 00:10:17.894 "seek_hole": false, 00:10:17.894 "seek_data": false, 00:10:17.894 "copy": false, 00:10:17.894 "nvme_iov_md": false 00:10:17.894 }, 00:10:17.894 "memory_domains": [ 00:10:17.894 { 00:10:17.894 "dma_device_id": "system", 00:10:17.894 "dma_device_type": 1 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.894 "dma_device_type": 2 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "dma_device_id": "system", 00:10:17.894 "dma_device_type": 1 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.894 "dma_device_type": 2 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "dma_device_id": "system", 00:10:17.894 "dma_device_type": 1 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.894 "dma_device_type": 2 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "dma_device_id": "system", 00:10:17.894 "dma_device_type": 1 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.894 "dma_device_type": 2 00:10:17.894 } 00:10:17.894 ], 00:10:17.894 "driver_specific": { 00:10:17.894 "raid": { 00:10:17.894 "uuid": "799dae47-95ae-43b2-8566-c1c22ad0f26e", 00:10:17.894 "strip_size_kb": 64, 00:10:17.894 "state": "online", 00:10:17.894 "raid_level": "concat", 00:10:17.894 "superblock": true, 00:10:17.894 "num_base_bdevs": 4, 00:10:17.894 "num_base_bdevs_discovered": 4, 00:10:17.894 "num_base_bdevs_operational": 4, 00:10:17.894 "base_bdevs_list": [ 00:10:17.894 { 00:10:17.894 "name": "pt1", 00:10:17.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.894 "is_configured": true, 00:10:17.894 "data_offset": 2048, 00:10:17.894 "data_size": 63488 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "name": "pt2", 00:10:17.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.894 "is_configured": true, 00:10:17.894 "data_offset": 2048, 00:10:17.894 "data_size": 63488 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "name": "pt3", 
00:10:17.894 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.894 "is_configured": true, 00:10:17.894 "data_offset": 2048, 00:10:17.894 "data_size": 63488 00:10:17.894 }, 00:10:17.894 { 00:10:17.894 "name": "pt4", 00:10:17.894 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.894 "is_configured": true, 00:10:17.894 "data_offset": 2048, 00:10:17.894 "data_size": 63488 00:10:17.894 } 00:10:17.894 ] 00:10:17.894 } 00:10:17.894 } 00:10:17.894 }' 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:17.894 pt2 00:10:17.894 pt3 00:10:17.894 pt4' 00:10:17.894 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.895 15:26:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt4 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.895 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.155 [2024-11-26 15:26:16.394551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 799dae47-95ae-43b2-8566-c1c22ad0f26e '!=' 799dae47-95ae-43b2-8566-c1c22ad0f26e ']' 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85027 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 
-- # '[' -z 85027 ']' 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85027 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85027 00:10:18.155 killing process with pid 85027 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85027' 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 85027 00:10:18.155 [2024-11-26 15:26:16.469701] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.155 [2024-11-26 15:26:16.469784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.155 [2024-11-26 15:26:16.469863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.155 [2024-11-26 15:26:16.469872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:18.155 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 85027 00:10:18.155 [2024-11-26 15:26:16.513049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.414 15:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:18.414 ************************************ 00:10:18.414 END TEST raid_superblock_test 00:10:18.414 ************************************ 00:10:18.414 00:10:18.414 real 0m4.048s 00:10:18.414 user 0m6.385s 
00:10:18.414 sys 0m0.899s 00:10:18.414 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.414 15:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.414 15:26:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:18.414 15:26:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:18.414 15:26:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.414 15:26:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.414 ************************************ 00:10:18.414 START TEST raid_read_error_test 00:10:18.414 ************************************ 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.414 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.U8azdwRJr1 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85270 
00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85270 00:10:18.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85270 ']' 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.415 15:26:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.674 [2024-11-26 15:26:16.907242] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:10:18.674 [2024-11-26 15:26:16.907364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85270 ] 00:10:18.674 [2024-11-26 15:26:17.042235] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:18.674 [2024-11-26 15:26:17.078503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.674 [2024-11-26 15:26:17.105522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.674 [2024-11-26 15:26:17.148476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.933 [2024-11-26 15:26:17.148590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.505 BaseBdev1_malloc 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.505 true 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:19.505 [2024-11-26 15:26:17.772251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:19.505 [2024-11-26 15:26:17.772319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.505 [2024-11-26 15:26:17.772355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:19.505 [2024-11-26 15:26:17.772368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.505 [2024-11-26 15:26:17.774515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.505 [2024-11-26 15:26:17.774556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:19.505 BaseBdev1 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.505 BaseBdev2_malloc 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.505 true 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.505 15:26:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.505 [2024-11-26 15:26:17.813099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:19.505 [2024-11-26 15:26:17.813162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.505 [2024-11-26 15:26:17.813213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:19.505 [2024-11-26 15:26:17.813224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.505 [2024-11-26 15:26:17.815312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.505 [2024-11-26 15:26:17.815348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:19.505 BaseBdev2 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.505 BaseBdev3_malloc 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.505 true 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.505 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.505 [2024-11-26 15:26:17.853943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:19.505 [2024-11-26 15:26:17.854009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.505 [2024-11-26 15:26:17.854028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:19.506 [2024-11-26 15:26:17.854038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.506 [2024-11-26 15:26:17.856114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.506 [2024-11-26 15:26:17.856238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:19.506 BaseBdev3 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.506 BaseBdev4_malloc 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.506 true 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.506 [2024-11-26 15:26:17.904587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:19.506 [2024-11-26 15:26:17.904654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.506 [2024-11-26 15:26:17.904673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.506 [2024-11-26 15:26:17.904685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.506 [2024-11-26 15:26:17.906750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.506 [2024-11-26 15:26:17.906791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:19.506 BaseBdev4 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.506 [2024-11-26 15:26:17.916633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.506 [2024-11-26 15:26:17.918506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.506 [2024-11-26 15:26:17.918580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.506 [2024-11-26 15:26:17.918633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:19.506 [2024-11-26 15:26:17.918829] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:19.506 [2024-11-26 15:26:17.918843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:19.506 [2024-11-26 15:26:17.919112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:19.506 [2024-11-26 15:26:17.919276] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:19.506 [2024-11-26 15:26:17.919286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:19.506 [2024-11-26 15:26:17.919429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.506 15:26:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.506 "name": "raid_bdev1", 00:10:19.506 "uuid": "d716476c-8c2b-41ba-862d-069776ce2b46", 00:10:19.506 "strip_size_kb": 64, 00:10:19.506 "state": "online", 00:10:19.506 "raid_level": "concat", 00:10:19.506 "superblock": true, 00:10:19.506 "num_base_bdevs": 4, 00:10:19.506 "num_base_bdevs_discovered": 4, 00:10:19.506 "num_base_bdevs_operational": 4, 00:10:19.506 "base_bdevs_list": [ 00:10:19.506 { 00:10:19.506 "name": "BaseBdev1", 00:10:19.506 "uuid": "ef53e6f1-264a-5536-9a66-352c211c01ac", 00:10:19.506 "is_configured": true, 00:10:19.506 "data_offset": 2048, 00:10:19.506 "data_size": 63488 00:10:19.506 }, 00:10:19.506 { 00:10:19.506 "name": "BaseBdev2", 00:10:19.506 "uuid": "6c72e6f5-52eb-590f-9f52-36020415c1cf", 
00:10:19.506 "is_configured": true, 00:10:19.506 "data_offset": 2048, 00:10:19.506 "data_size": 63488 00:10:19.506 }, 00:10:19.506 { 00:10:19.506 "name": "BaseBdev3", 00:10:19.506 "uuid": "99d0b1d2-aeab-5cda-9933-f6f5138643d2", 00:10:19.506 "is_configured": true, 00:10:19.506 "data_offset": 2048, 00:10:19.506 "data_size": 63488 00:10:19.506 }, 00:10:19.506 { 00:10:19.506 "name": "BaseBdev4", 00:10:19.506 "uuid": "0c59b51e-840c-51bd-a896-86649c8d41d7", 00:10:19.506 "is_configured": true, 00:10:19.506 "data_offset": 2048, 00:10:19.506 "data_size": 63488 00:10:19.506 } 00:10:19.506 ] 00:10:19.506 }' 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.506 15:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.075 15:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:20.075 15:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:20.075 [2024-11-26 15:26:18.429142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:21.014 15:26:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.014 "name": "raid_bdev1", 00:10:21.014 "uuid": "d716476c-8c2b-41ba-862d-069776ce2b46", 00:10:21.014 "strip_size_kb": 64, 00:10:21.014 "state": "online", 00:10:21.014 "raid_level": "concat", 00:10:21.014 "superblock": true, 00:10:21.014 "num_base_bdevs": 4, 
00:10:21.014 "num_base_bdevs_discovered": 4, 00:10:21.014 "num_base_bdevs_operational": 4, 00:10:21.014 "base_bdevs_list": [ 00:10:21.014 { 00:10:21.014 "name": "BaseBdev1", 00:10:21.014 "uuid": "ef53e6f1-264a-5536-9a66-352c211c01ac", 00:10:21.014 "is_configured": true, 00:10:21.014 "data_offset": 2048, 00:10:21.014 "data_size": 63488 00:10:21.014 }, 00:10:21.014 { 00:10:21.014 "name": "BaseBdev2", 00:10:21.014 "uuid": "6c72e6f5-52eb-590f-9f52-36020415c1cf", 00:10:21.014 "is_configured": true, 00:10:21.014 "data_offset": 2048, 00:10:21.014 "data_size": 63488 00:10:21.014 }, 00:10:21.014 { 00:10:21.014 "name": "BaseBdev3", 00:10:21.014 "uuid": "99d0b1d2-aeab-5cda-9933-f6f5138643d2", 00:10:21.014 "is_configured": true, 00:10:21.014 "data_offset": 2048, 00:10:21.014 "data_size": 63488 00:10:21.014 }, 00:10:21.014 { 00:10:21.014 "name": "BaseBdev4", 00:10:21.014 "uuid": "0c59b51e-840c-51bd-a896-86649c8d41d7", 00:10:21.014 "is_configured": true, 00:10:21.014 "data_offset": 2048, 00:10:21.014 "data_size": 63488 00:10:21.014 } 00:10:21.014 ] 00:10:21.014 }' 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.014 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.581 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.581 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.581 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.581 [2024-11-26 15:26:19.823837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.581 [2024-11-26 15:26:19.823937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.581 [2024-11-26 15:26:19.826412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.581 [2024-11-26 15:26:19.826509] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.581 [2024-11-26 15:26:19.826569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.581 [2024-11-26 15:26:19.826641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:21.581 { 00:10:21.581 "results": [ 00:10:21.581 { 00:10:21.581 "job": "raid_bdev1", 00:10:21.581 "core_mask": "0x1", 00:10:21.581 "workload": "randrw", 00:10:21.581 "percentage": 50, 00:10:21.581 "status": "finished", 00:10:21.581 "queue_depth": 1, 00:10:21.581 "io_size": 131072, 00:10:21.581 "runtime": 1.392922, 00:10:21.581 "iops": 16690.094635593377, 00:10:21.581 "mibps": 2086.261829449172, 00:10:21.581 "io_failed": 1, 00:10:21.581 "io_timeout": 0, 00:10:21.581 "avg_latency_us": 83.26396019983477, 00:10:21.581 "min_latency_us": 24.43301664778175, 00:10:21.581 "max_latency_us": 1378.0667654493159 00:10:21.581 } 00:10:21.581 ], 00:10:21.581 "core_count": 1 00:10:21.581 } 00:10:21.581 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.581 15:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85270 00:10:21.581 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85270 ']' 00:10:21.581 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85270 00:10:21.581 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:21.581 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.582 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85270 00:10:21.582 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.582 15:26:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.582 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85270' 00:10:21.582 killing process with pid 85270 00:10:21.582 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85270 00:10:21.582 [2024-11-26 15:26:19.860579] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.582 15:26:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85270 00:10:21.582 [2024-11-26 15:26:19.895790] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.U8azdwRJr1 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:21.842 00:10:21.842 real 0m3.308s 00:10:21.842 user 0m4.185s 00:10:21.842 sys 0m0.517s 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.842 ************************************ 00:10:21.842 END TEST raid_read_error_test 00:10:21.842 ************************************ 00:10:21.842 15:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.842 15:26:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test concat 4 write 00:10:21.842 15:26:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:21.842 15:26:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.842 15:26:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.842 ************************************ 00:10:21.842 START TEST raid_write_error_test 00:10:21.842 ************************************ 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LENDkG442r 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85409 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 85409 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 85409 ']' 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.842 15:26:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.842 [2024-11-26 15:26:20.282592] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:10:21.842 [2024-11-26 15:26:20.282721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85409 ] 00:10:22.102 [2024-11-26 15:26:20.417383] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:22.102 [2024-11-26 15:26:20.454702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.102 [2024-11-26 15:26:20.481298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.102 [2024-11-26 15:26:20.524265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.102 [2024-11-26 15:26:20.524302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.671 BaseBdev1_malloc 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.671 true 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:22.671 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.671 15:26:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.671 [2024-11-26 15:26:21.143798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:22.671 [2024-11-26 15:26:21.143855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.671 [2024-11-26 15:26:21.143870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:22.671 [2024-11-26 15:26:21.143882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.931 [2024-11-26 15:26:21.145950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.931 [2024-11-26 15:26:21.145992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:22.931 BaseBdev1 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.931 BaseBdev2_malloc 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.931 true 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.931 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.932 [2024-11-26 15:26:21.184445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:22.932 [2024-11-26 15:26:21.184495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.932 [2024-11-26 15:26:21.184511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:22.932 [2024-11-26 15:26:21.184521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.932 [2024-11-26 15:26:21.186545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.932 [2024-11-26 15:26:21.186580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:22.932 BaseBdev2 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.932 BaseBdev3_malloc 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.932 true 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.932 [2024-11-26 15:26:21.225314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:22.932 [2024-11-26 15:26:21.225430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.932 [2024-11-26 15:26:21.225453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:22.932 [2024-11-26 15:26:21.225464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.932 [2024-11-26 15:26:21.227575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.932 [2024-11-26 15:26:21.227612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:22.932 BaseBdev3 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.932 BaseBdev4_malloc 00:10:22.932 
15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.932 true 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.932 [2024-11-26 15:26:21.283112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:22.932 [2024-11-26 15:26:21.283170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.932 [2024-11-26 15:26:21.283199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:22.932 [2024-11-26 15:26:21.283210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.932 [2024-11-26 15:26:21.285374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.932 [2024-11-26 15:26:21.285477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:22.932 BaseBdev4 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:22.932 15:26:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.932 [2024-11-26 15:26:21.295139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.932 [2024-11-26 15:26:21.297017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.932 [2024-11-26 15:26:21.297156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.932 [2024-11-26 15:26:21.297229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:22.932 [2024-11-26 15:26:21.297436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:22.932 [2024-11-26 15:26:21.297450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:22.932 [2024-11-26 15:26:21.297700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:22.932 [2024-11-26 15:26:21.297833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:22.932 [2024-11-26 15:26:21.297843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:22.932 [2024-11-26 15:26:21.297974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.932 "name": "raid_bdev1", 00:10:22.932 "uuid": "fa947fc6-52d6-4434-b82f-7cd0967cc9e1", 00:10:22.932 "strip_size_kb": 64, 00:10:22.932 "state": "online", 00:10:22.932 "raid_level": "concat", 00:10:22.932 "superblock": true, 00:10:22.932 "num_base_bdevs": 4, 00:10:22.932 "num_base_bdevs_discovered": 4, 00:10:22.932 "num_base_bdevs_operational": 4, 00:10:22.932 "base_bdevs_list": [ 00:10:22.932 { 00:10:22.932 "name": "BaseBdev1", 00:10:22.932 "uuid": "57614cff-0063-55d7-a969-a25f05afa007", 00:10:22.932 "is_configured": true, 00:10:22.932 "data_offset": 2048, 00:10:22.932 "data_size": 63488 00:10:22.932 }, 00:10:22.932 { 00:10:22.932 
"name": "BaseBdev2", 00:10:22.932 "uuid": "9cfac8c4-b664-5e71-8906-50d9ac26c87c", 00:10:22.932 "is_configured": true, 00:10:22.932 "data_offset": 2048, 00:10:22.932 "data_size": 63488 00:10:22.932 }, 00:10:22.932 { 00:10:22.932 "name": "BaseBdev3", 00:10:22.932 "uuid": "d1e7917c-56f2-5dfd-8f63-8c667ab039e0", 00:10:22.932 "is_configured": true, 00:10:22.932 "data_offset": 2048, 00:10:22.932 "data_size": 63488 00:10:22.932 }, 00:10:22.932 { 00:10:22.932 "name": "BaseBdev4", 00:10:22.932 "uuid": "32f031eb-0054-5c6e-b28d-54829f2a3d76", 00:10:22.932 "is_configured": true, 00:10:22.932 "data_offset": 2048, 00:10:22.932 "data_size": 63488 00:10:22.932 } 00:10:22.932 ] 00:10:22.932 }' 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.932 15:26:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.502 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:23.502 15:26:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:23.502 [2024-11-26 15:26:21.779650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.466 "name": "raid_bdev1", 00:10:24.466 "uuid": "fa947fc6-52d6-4434-b82f-7cd0967cc9e1", 00:10:24.466 "strip_size_kb": 64, 00:10:24.466 "state": "online", 
00:10:24.466 "raid_level": "concat", 00:10:24.466 "superblock": true, 00:10:24.466 "num_base_bdevs": 4, 00:10:24.466 "num_base_bdevs_discovered": 4, 00:10:24.466 "num_base_bdevs_operational": 4, 00:10:24.466 "base_bdevs_list": [ 00:10:24.466 { 00:10:24.466 "name": "BaseBdev1", 00:10:24.466 "uuid": "57614cff-0063-55d7-a969-a25f05afa007", 00:10:24.466 "is_configured": true, 00:10:24.466 "data_offset": 2048, 00:10:24.466 "data_size": 63488 00:10:24.466 }, 00:10:24.466 { 00:10:24.466 "name": "BaseBdev2", 00:10:24.466 "uuid": "9cfac8c4-b664-5e71-8906-50d9ac26c87c", 00:10:24.466 "is_configured": true, 00:10:24.466 "data_offset": 2048, 00:10:24.466 "data_size": 63488 00:10:24.466 }, 00:10:24.466 { 00:10:24.466 "name": "BaseBdev3", 00:10:24.466 "uuid": "d1e7917c-56f2-5dfd-8f63-8c667ab039e0", 00:10:24.466 "is_configured": true, 00:10:24.466 "data_offset": 2048, 00:10:24.466 "data_size": 63488 00:10:24.466 }, 00:10:24.466 { 00:10:24.466 "name": "BaseBdev4", 00:10:24.466 "uuid": "32f031eb-0054-5c6e-b28d-54829f2a3d76", 00:10:24.466 "is_configured": true, 00:10:24.466 "data_offset": 2048, 00:10:24.466 "data_size": 63488 00:10:24.466 } 00:10:24.466 ] 00:10:24.466 }' 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.466 15:26:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.736 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.736 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.736 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.736 [2024-11-26 15:26:23.154259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.736 [2024-11-26 15:26:23.154356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.736 [2024-11-26 15:26:23.156914] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.736 [2024-11-26 15:26:23.156972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.736 [2024-11-26 15:26:23.157016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.736 [2024-11-26 15:26:23.157030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:24.736 { 00:10:24.736 "results": [ 00:10:24.736 { 00:10:24.736 "job": "raid_bdev1", 00:10:24.736 "core_mask": "0x1", 00:10:24.736 "workload": "randrw", 00:10:24.736 "percentage": 50, 00:10:24.736 "status": "finished", 00:10:24.736 "queue_depth": 1, 00:10:24.737 "io_size": 131072, 00:10:24.737 "runtime": 1.372739, 00:10:24.737 "iops": 16637.53998392994, 00:10:24.737 "mibps": 2079.6924979912424, 00:10:24.737 "io_failed": 1, 00:10:24.737 "io_timeout": 0, 00:10:24.737 "avg_latency_us": 83.3882690846689, 00:10:24.737 "min_latency_us": 25.883378366599842, 00:10:24.737 "max_latency_us": 1449.4691885295913 00:10:24.737 } 00:10:24.737 ], 00:10:24.737 "core_count": 1 00:10:24.737 } 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85409 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 85409 ']' 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 85409 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85409 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85409' 00:10:24.737 killing process with pid 85409 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 85409 00:10:24.737 [2024-11-26 15:26:23.198613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.737 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 85409 00:10:24.997 [2024-11-26 15:26:23.232788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.997 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.997 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LENDkG442r 00:10:24.997 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.997 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:24.997 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:24.997 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.997 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.997 15:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:24.997 ************************************ 00:10:24.997 END TEST raid_write_error_test 00:10:24.997 ************************************ 00:10:24.997 00:10:24.997 real 0m3.272s 00:10:24.997 user 0m4.099s 00:10:24.998 sys 0m0.527s 00:10:24.998 15:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.998 15:26:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.258 15:26:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:25.258 15:26:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:25.258 15:26:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.258 15:26:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.258 15:26:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.258 ************************************ 00:10:25.258 START TEST raid_state_function_test 00:10:25.258 ************************************ 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.258 15:26:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:25.258 Process raid pid: 85537 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=85537 
00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85537' 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 85537 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 85537 ']' 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.258 15:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.258 [2024-11-26 15:26:23.618151] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:10:25.258 [2024-11-26 15:26:23.618386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.518 [2024-11-26 15:26:23.754282] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:25.519 [2024-11-26 15:26:23.790903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.519 [2024-11-26 15:26:23.815768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.519 [2024-11-26 15:26:23.858583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.519 [2024-11-26 15:26:23.858615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.089 [2024-11-26 15:26:24.445348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.089 [2024-11-26 15:26:24.445403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.089 [2024-11-26 15:26:24.445422] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.089 [2024-11-26 15:26:24.445430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.089 [2024-11-26 15:26:24.445440] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.089 [2024-11-26 15:26:24.445447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.089 [2024-11-26 15:26:24.445456] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.089 [2024-11-26 
15:26:24.445463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.089 15:26:24 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.090 "name": "Existed_Raid", 00:10:26.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.090 "strip_size_kb": 0, 00:10:26.090 "state": "configuring", 00:10:26.090 "raid_level": "raid1", 00:10:26.090 "superblock": false, 00:10:26.090 "num_base_bdevs": 4, 00:10:26.090 "num_base_bdevs_discovered": 0, 00:10:26.090 "num_base_bdevs_operational": 4, 00:10:26.090 "base_bdevs_list": [ 00:10:26.090 { 00:10:26.090 "name": "BaseBdev1", 00:10:26.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.090 "is_configured": false, 00:10:26.090 "data_offset": 0, 00:10:26.090 "data_size": 0 00:10:26.090 }, 00:10:26.090 { 00:10:26.090 "name": "BaseBdev2", 00:10:26.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.090 "is_configured": false, 00:10:26.090 "data_offset": 0, 00:10:26.090 "data_size": 0 00:10:26.090 }, 00:10:26.090 { 00:10:26.090 "name": "BaseBdev3", 00:10:26.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.090 "is_configured": false, 00:10:26.090 "data_offset": 0, 00:10:26.090 "data_size": 0 00:10:26.090 }, 00:10:26.090 { 00:10:26.090 "name": "BaseBdev4", 00:10:26.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.090 "is_configured": false, 00:10:26.090 "data_offset": 0, 00:10:26.090 "data_size": 0 00:10:26.090 } 00:10:26.090 ] 00:10:26.090 }' 00:10:26.090 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.090 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.659 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.659 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.659 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.659 [2024-11-26 15:26:24.881363] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:26.659 [2024-11-26 15:26:24.881450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:26.659 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.659 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.659 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.659 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.659 [2024-11-26 15:26:24.893386] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.659 [2024-11-26 15:26:24.893467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.659 [2024-11-26 15:26:24.893500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.659 [2024-11-26 15:26:24.893521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.659 [2024-11-26 15:26:24.893567] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.659 [2024-11-26 15:26:24.893596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.660 [2024-11-26 15:26:24.893624] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.660 [2024-11-26 15:26:24.893654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.660 15:26:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.660 [2024-11-26 15:26:24.914301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.660 BaseBdev1 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.660 [ 00:10:26.660 { 00:10:26.660 "name": "BaseBdev1", 00:10:26.660 "aliases": [ 
00:10:26.660 "b4d1bf66-e843-4549-b44c-a4dd267dd4e9" 00:10:26.660 ], 00:10:26.660 "product_name": "Malloc disk", 00:10:26.660 "block_size": 512, 00:10:26.660 "num_blocks": 65536, 00:10:26.660 "uuid": "b4d1bf66-e843-4549-b44c-a4dd267dd4e9", 00:10:26.660 "assigned_rate_limits": { 00:10:26.660 "rw_ios_per_sec": 0, 00:10:26.660 "rw_mbytes_per_sec": 0, 00:10:26.660 "r_mbytes_per_sec": 0, 00:10:26.660 "w_mbytes_per_sec": 0 00:10:26.660 }, 00:10:26.660 "claimed": true, 00:10:26.660 "claim_type": "exclusive_write", 00:10:26.660 "zoned": false, 00:10:26.660 "supported_io_types": { 00:10:26.660 "read": true, 00:10:26.660 "write": true, 00:10:26.660 "unmap": true, 00:10:26.660 "flush": true, 00:10:26.660 "reset": true, 00:10:26.660 "nvme_admin": false, 00:10:26.660 "nvme_io": false, 00:10:26.660 "nvme_io_md": false, 00:10:26.660 "write_zeroes": true, 00:10:26.660 "zcopy": true, 00:10:26.660 "get_zone_info": false, 00:10:26.660 "zone_management": false, 00:10:26.660 "zone_append": false, 00:10:26.660 "compare": false, 00:10:26.660 "compare_and_write": false, 00:10:26.660 "abort": true, 00:10:26.660 "seek_hole": false, 00:10:26.660 "seek_data": false, 00:10:26.660 "copy": true, 00:10:26.660 "nvme_iov_md": false 00:10:26.660 }, 00:10:26.660 "memory_domains": [ 00:10:26.660 { 00:10:26.660 "dma_device_id": "system", 00:10:26.660 "dma_device_type": 1 00:10:26.660 }, 00:10:26.660 { 00:10:26.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.660 "dma_device_type": 2 00:10:26.660 } 00:10:26.660 ], 00:10:26.660 "driver_specific": {} 00:10:26.660 } 00:10:26.660 ] 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.660 15:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.660 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.660 "name": "Existed_Raid", 00:10:26.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.660 "strip_size_kb": 0, 00:10:26.660 "state": "configuring", 00:10:26.660 "raid_level": "raid1", 00:10:26.660 "superblock": false, 00:10:26.660 "num_base_bdevs": 4, 00:10:26.660 "num_base_bdevs_discovered": 1, 00:10:26.660 "num_base_bdevs_operational": 4, 
00:10:26.660 "base_bdevs_list": [ 00:10:26.660 { 00:10:26.660 "name": "BaseBdev1", 00:10:26.660 "uuid": "b4d1bf66-e843-4549-b44c-a4dd267dd4e9", 00:10:26.660 "is_configured": true, 00:10:26.660 "data_offset": 0, 00:10:26.660 "data_size": 65536 00:10:26.660 }, 00:10:26.660 { 00:10:26.660 "name": "BaseBdev2", 00:10:26.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.660 "is_configured": false, 00:10:26.660 "data_offset": 0, 00:10:26.660 "data_size": 0 00:10:26.660 }, 00:10:26.660 { 00:10:26.660 "name": "BaseBdev3", 00:10:26.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.660 "is_configured": false, 00:10:26.660 "data_offset": 0, 00:10:26.660 "data_size": 0 00:10:26.660 }, 00:10:26.660 { 00:10:26.660 "name": "BaseBdev4", 00:10:26.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.660 "is_configured": false, 00:10:26.660 "data_offset": 0, 00:10:26.660 "data_size": 0 00:10:26.660 } 00:10:26.660 ] 00:10:26.660 }' 00:10:26.660 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.660 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.920 [2024-11-26 15:26:25.322444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.920 [2024-11-26 15:26:25.322551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 
-b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.920 [2024-11-26 15:26:25.334475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.920 [2024-11-26 15:26:25.336399] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.920 [2024-11-26 15:26:25.336439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.920 [2024-11-26 15:26:25.336450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.920 [2024-11-26 15:26:25.336458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.920 [2024-11-26 15:26:25.336465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.920 [2024-11-26 15:26:25.336472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.920 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.920 "name": "Existed_Raid", 00:10:26.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.920 "strip_size_kb": 0, 00:10:26.920 "state": "configuring", 00:10:26.920 "raid_level": "raid1", 00:10:26.920 "superblock": false, 00:10:26.920 "num_base_bdevs": 4, 00:10:26.920 "num_base_bdevs_discovered": 1, 00:10:26.920 "num_base_bdevs_operational": 4, 00:10:26.920 "base_bdevs_list": [ 00:10:26.920 { 00:10:26.920 "name": "BaseBdev1", 00:10:26.920 "uuid": "b4d1bf66-e843-4549-b44c-a4dd267dd4e9", 00:10:26.920 "is_configured": true, 00:10:26.920 "data_offset": 0, 00:10:26.920 "data_size": 65536 00:10:26.920 }, 00:10:26.920 { 
00:10:26.920 "name": "BaseBdev2", 00:10:26.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.920 "is_configured": false, 00:10:26.921 "data_offset": 0, 00:10:26.921 "data_size": 0 00:10:26.921 }, 00:10:26.921 { 00:10:26.921 "name": "BaseBdev3", 00:10:26.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.921 "is_configured": false, 00:10:26.921 "data_offset": 0, 00:10:26.921 "data_size": 0 00:10:26.921 }, 00:10:26.921 { 00:10:26.921 "name": "BaseBdev4", 00:10:26.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.921 "is_configured": false, 00:10:26.921 "data_offset": 0, 00:10:26.921 "data_size": 0 00:10:26.921 } 00:10:26.921 ] 00:10:26.921 }' 00:10:27.180 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.180 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.439 [2024-11-26 15:26:25.778858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.439 BaseBdev2 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.439 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.439 [ 00:10:27.439 { 00:10:27.439 "name": "BaseBdev2", 00:10:27.439 "aliases": [ 00:10:27.439 "d3c0911c-4449-413b-85b0-20d81e99290c" 00:10:27.439 ], 00:10:27.439 "product_name": "Malloc disk", 00:10:27.439 "block_size": 512, 00:10:27.439 "num_blocks": 65536, 00:10:27.439 "uuid": "d3c0911c-4449-413b-85b0-20d81e99290c", 00:10:27.439 "assigned_rate_limits": { 00:10:27.439 "rw_ios_per_sec": 0, 00:10:27.439 "rw_mbytes_per_sec": 0, 00:10:27.439 "r_mbytes_per_sec": 0, 00:10:27.439 "w_mbytes_per_sec": 0 00:10:27.439 }, 00:10:27.439 "claimed": true, 00:10:27.439 "claim_type": "exclusive_write", 00:10:27.440 "zoned": false, 00:10:27.440 "supported_io_types": { 00:10:27.440 "read": true, 00:10:27.440 "write": true, 00:10:27.440 "unmap": true, 00:10:27.440 "flush": true, 00:10:27.440 "reset": true, 00:10:27.440 "nvme_admin": false, 00:10:27.440 "nvme_io": false, 00:10:27.440 "nvme_io_md": false, 00:10:27.440 "write_zeroes": true, 00:10:27.440 "zcopy": true, 00:10:27.440 "get_zone_info": false, 00:10:27.440 "zone_management": false, 
00:10:27.440 "zone_append": false, 00:10:27.440 "compare": false, 00:10:27.440 "compare_and_write": false, 00:10:27.440 "abort": true, 00:10:27.440 "seek_hole": false, 00:10:27.440 "seek_data": false, 00:10:27.440 "copy": true, 00:10:27.440 "nvme_iov_md": false 00:10:27.440 }, 00:10:27.440 "memory_domains": [ 00:10:27.440 { 00:10:27.440 "dma_device_id": "system", 00:10:27.440 "dma_device_type": 1 00:10:27.440 }, 00:10:27.440 { 00:10:27.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.440 "dma_device_type": 2 00:10:27.440 } 00:10:27.440 ], 00:10:27.440 "driver_specific": {} 00:10:27.440 } 00:10:27.440 ] 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.440 15:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.440 "name": "Existed_Raid", 00:10:27.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.440 "strip_size_kb": 0, 00:10:27.440 "state": "configuring", 00:10:27.440 "raid_level": "raid1", 00:10:27.440 "superblock": false, 00:10:27.440 "num_base_bdevs": 4, 00:10:27.440 "num_base_bdevs_discovered": 2, 00:10:27.440 "num_base_bdevs_operational": 4, 00:10:27.440 "base_bdevs_list": [ 00:10:27.440 { 00:10:27.440 "name": "BaseBdev1", 00:10:27.440 "uuid": "b4d1bf66-e843-4549-b44c-a4dd267dd4e9", 00:10:27.440 "is_configured": true, 00:10:27.440 "data_offset": 0, 00:10:27.440 "data_size": 65536 00:10:27.440 }, 00:10:27.440 { 00:10:27.440 "name": "BaseBdev2", 00:10:27.440 "uuid": "d3c0911c-4449-413b-85b0-20d81e99290c", 00:10:27.440 "is_configured": true, 00:10:27.440 "data_offset": 0, 00:10:27.440 "data_size": 65536 00:10:27.440 }, 00:10:27.440 { 00:10:27.440 "name": "BaseBdev3", 00:10:27.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.440 "is_configured": false, 00:10:27.440 "data_offset": 0, 00:10:27.440 "data_size": 0 00:10:27.440 }, 00:10:27.440 { 00:10:27.440 "name": "BaseBdev4", 
00:10:27.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.440 "is_configured": false, 00:10:27.440 "data_offset": 0, 00:10:27.440 "data_size": 0 00:10:27.440 } 00:10:27.440 ] 00:10:27.440 }' 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.440 15:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.010 [2024-11-26 15:26:26.241572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.010 BaseBdev3 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.010 [ 00:10:28.010 { 00:10:28.010 "name": "BaseBdev3", 00:10:28.010 "aliases": [ 00:10:28.010 "6a6a5131-c866-4f76-8dd2-aec7373dcb8d" 00:10:28.010 ], 00:10:28.010 "product_name": "Malloc disk", 00:10:28.010 "block_size": 512, 00:10:28.010 "num_blocks": 65536, 00:10:28.010 "uuid": "6a6a5131-c866-4f76-8dd2-aec7373dcb8d", 00:10:28.010 "assigned_rate_limits": { 00:10:28.010 "rw_ios_per_sec": 0, 00:10:28.010 "rw_mbytes_per_sec": 0, 00:10:28.010 "r_mbytes_per_sec": 0, 00:10:28.010 "w_mbytes_per_sec": 0 00:10:28.010 }, 00:10:28.010 "claimed": true, 00:10:28.010 "claim_type": "exclusive_write", 00:10:28.010 "zoned": false, 00:10:28.010 "supported_io_types": { 00:10:28.010 "read": true, 00:10:28.010 "write": true, 00:10:28.010 "unmap": true, 00:10:28.010 "flush": true, 00:10:28.010 "reset": true, 00:10:28.010 "nvme_admin": false, 00:10:28.010 "nvme_io": false, 00:10:28.010 "nvme_io_md": false, 00:10:28.010 "write_zeroes": true, 00:10:28.010 "zcopy": true, 00:10:28.010 "get_zone_info": false, 00:10:28.010 "zone_management": false, 00:10:28.010 "zone_append": false, 00:10:28.010 "compare": false, 00:10:28.010 "compare_and_write": false, 00:10:28.010 "abort": true, 00:10:28.010 "seek_hole": false, 00:10:28.010 "seek_data": false, 00:10:28.010 "copy": true, 00:10:28.010 "nvme_iov_md": false 00:10:28.010 }, 00:10:28.010 "memory_domains": [ 00:10:28.010 { 00:10:28.010 "dma_device_id": "system", 00:10:28.010 "dma_device_type": 1 00:10:28.010 }, 00:10:28.010 { 00:10:28.010 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:28.010 "dma_device_type": 2 00:10:28.010 } 00:10:28.010 ], 00:10:28.010 "driver_specific": {} 00:10:28.010 } 00:10:28.010 ] 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.010 
15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.010 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.010 "name": "Existed_Raid", 00:10:28.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.010 "strip_size_kb": 0, 00:10:28.010 "state": "configuring", 00:10:28.010 "raid_level": "raid1", 00:10:28.010 "superblock": false, 00:10:28.010 "num_base_bdevs": 4, 00:10:28.010 "num_base_bdevs_discovered": 3, 00:10:28.010 "num_base_bdevs_operational": 4, 00:10:28.010 "base_bdevs_list": [ 00:10:28.010 { 00:10:28.010 "name": "BaseBdev1", 00:10:28.010 "uuid": "b4d1bf66-e843-4549-b44c-a4dd267dd4e9", 00:10:28.010 "is_configured": true, 00:10:28.010 "data_offset": 0, 00:10:28.010 "data_size": 65536 00:10:28.010 }, 00:10:28.010 { 00:10:28.010 "name": "BaseBdev2", 00:10:28.010 "uuid": "d3c0911c-4449-413b-85b0-20d81e99290c", 00:10:28.010 "is_configured": true, 00:10:28.010 "data_offset": 0, 00:10:28.010 "data_size": 65536 00:10:28.010 }, 00:10:28.010 { 00:10:28.010 "name": "BaseBdev3", 00:10:28.010 "uuid": "6a6a5131-c866-4f76-8dd2-aec7373dcb8d", 00:10:28.010 "is_configured": true, 00:10:28.010 "data_offset": 0, 00:10:28.010 "data_size": 65536 00:10:28.010 }, 00:10:28.010 { 00:10:28.011 "name": "BaseBdev4", 00:10:28.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.011 "is_configured": false, 00:10:28.011 "data_offset": 0, 00:10:28.011 "data_size": 0 00:10:28.011 } 00:10:28.011 ] 00:10:28.011 }' 00:10:28.011 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.011 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.271 [2024-11-26 15:26:26.717004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.271 [2024-11-26 15:26:26.717065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:28.271 [2024-11-26 15:26:26.717077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:28.271 [2024-11-26 15:26:26.717372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:28.271 [2024-11-26 15:26:26.717539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:28.271 [2024-11-26 15:26:26.717562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:28.271 BaseBdev4 00:10:28.271 [2024-11-26 15:26:26.717779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.271 
15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.271 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.271 [ 00:10:28.271 { 00:10:28.271 "name": "BaseBdev4", 00:10:28.271 "aliases": [ 00:10:28.271 "52560eb7-9f56-4501-a15a-3706565a7d82" 00:10:28.271 ], 00:10:28.271 "product_name": "Malloc disk", 00:10:28.271 "block_size": 512, 00:10:28.271 "num_blocks": 65536, 00:10:28.271 "uuid": "52560eb7-9f56-4501-a15a-3706565a7d82", 00:10:28.271 "assigned_rate_limits": { 00:10:28.271 "rw_ios_per_sec": 0, 00:10:28.532 "rw_mbytes_per_sec": 0, 00:10:28.532 "r_mbytes_per_sec": 0, 00:10:28.532 "w_mbytes_per_sec": 0 00:10:28.532 }, 00:10:28.532 "claimed": true, 00:10:28.532 "claim_type": "exclusive_write", 00:10:28.532 "zoned": false, 00:10:28.532 "supported_io_types": { 00:10:28.532 "read": true, 00:10:28.532 "write": true, 00:10:28.532 "unmap": true, 00:10:28.532 "flush": true, 00:10:28.532 "reset": true, 00:10:28.532 "nvme_admin": false, 00:10:28.532 "nvme_io": false, 00:10:28.532 "nvme_io_md": false, 00:10:28.532 "write_zeroes": true, 00:10:28.532 "zcopy": true, 00:10:28.532 "get_zone_info": false, 00:10:28.532 "zone_management": false, 00:10:28.532 "zone_append": false, 00:10:28.532 "compare": false, 00:10:28.532 "compare_and_write": false, 00:10:28.532 "abort": true, 00:10:28.532 "seek_hole": false, 
00:10:28.532 "seek_data": false, 00:10:28.532 "copy": true, 00:10:28.532 "nvme_iov_md": false 00:10:28.532 }, 00:10:28.532 "memory_domains": [ 00:10:28.532 { 00:10:28.532 "dma_device_id": "system", 00:10:28.532 "dma_device_type": 1 00:10:28.532 }, 00:10:28.532 { 00:10:28.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.532 "dma_device_type": 2 00:10:28.532 } 00:10:28.532 ], 00:10:28.532 "driver_specific": {} 00:10:28.532 } 00:10:28.532 ] 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.532 "name": "Existed_Raid", 00:10:28.532 "uuid": "6e646fa9-9c71-45a0-b7dd-705a91695a24", 00:10:28.532 "strip_size_kb": 0, 00:10:28.532 "state": "online", 00:10:28.532 "raid_level": "raid1", 00:10:28.532 "superblock": false, 00:10:28.532 "num_base_bdevs": 4, 00:10:28.532 "num_base_bdevs_discovered": 4, 00:10:28.532 "num_base_bdevs_operational": 4, 00:10:28.532 "base_bdevs_list": [ 00:10:28.532 { 00:10:28.532 "name": "BaseBdev1", 00:10:28.532 "uuid": "b4d1bf66-e843-4549-b44c-a4dd267dd4e9", 00:10:28.532 "is_configured": true, 00:10:28.532 "data_offset": 0, 00:10:28.532 "data_size": 65536 00:10:28.532 }, 00:10:28.532 { 00:10:28.532 "name": "BaseBdev2", 00:10:28.532 "uuid": "d3c0911c-4449-413b-85b0-20d81e99290c", 00:10:28.532 "is_configured": true, 00:10:28.532 "data_offset": 0, 00:10:28.532 "data_size": 65536 00:10:28.532 }, 00:10:28.532 { 00:10:28.532 "name": "BaseBdev3", 00:10:28.532 "uuid": "6a6a5131-c866-4f76-8dd2-aec7373dcb8d", 00:10:28.532 "is_configured": true, 00:10:28.532 "data_offset": 0, 00:10:28.532 "data_size": 65536 00:10:28.532 }, 00:10:28.532 { 00:10:28.532 "name": "BaseBdev4", 00:10:28.532 "uuid": "52560eb7-9f56-4501-a15a-3706565a7d82", 00:10:28.532 "is_configured": true, 00:10:28.532 "data_offset": 0, 00:10:28.532 "data_size": 65536 00:10:28.532 } 00:10:28.532 ] 
00:10:28.532 }' 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.532 15:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.791 [2024-11-26 15:26:27.193504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.791 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.791 "name": "Existed_Raid", 00:10:28.791 "aliases": [ 00:10:28.791 "6e646fa9-9c71-45a0-b7dd-705a91695a24" 00:10:28.791 ], 00:10:28.791 "product_name": "Raid Volume", 00:10:28.791 "block_size": 512, 00:10:28.791 "num_blocks": 65536, 00:10:28.791 "uuid": "6e646fa9-9c71-45a0-b7dd-705a91695a24", 00:10:28.791 
"assigned_rate_limits": { 00:10:28.791 "rw_ios_per_sec": 0, 00:10:28.791 "rw_mbytes_per_sec": 0, 00:10:28.791 "r_mbytes_per_sec": 0, 00:10:28.791 "w_mbytes_per_sec": 0 00:10:28.791 }, 00:10:28.791 "claimed": false, 00:10:28.791 "zoned": false, 00:10:28.791 "supported_io_types": { 00:10:28.791 "read": true, 00:10:28.791 "write": true, 00:10:28.791 "unmap": false, 00:10:28.791 "flush": false, 00:10:28.791 "reset": true, 00:10:28.791 "nvme_admin": false, 00:10:28.791 "nvme_io": false, 00:10:28.791 "nvme_io_md": false, 00:10:28.791 "write_zeroes": true, 00:10:28.791 "zcopy": false, 00:10:28.791 "get_zone_info": false, 00:10:28.791 "zone_management": false, 00:10:28.791 "zone_append": false, 00:10:28.791 "compare": false, 00:10:28.791 "compare_and_write": false, 00:10:28.791 "abort": false, 00:10:28.791 "seek_hole": false, 00:10:28.791 "seek_data": false, 00:10:28.791 "copy": false, 00:10:28.791 "nvme_iov_md": false 00:10:28.791 }, 00:10:28.791 "memory_domains": [ 00:10:28.791 { 00:10:28.791 "dma_device_id": "system", 00:10:28.791 "dma_device_type": 1 00:10:28.791 }, 00:10:28.791 { 00:10:28.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.791 "dma_device_type": 2 00:10:28.791 }, 00:10:28.791 { 00:10:28.791 "dma_device_id": "system", 00:10:28.791 "dma_device_type": 1 00:10:28.791 }, 00:10:28.791 { 00:10:28.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.791 "dma_device_type": 2 00:10:28.791 }, 00:10:28.791 { 00:10:28.791 "dma_device_id": "system", 00:10:28.791 "dma_device_type": 1 00:10:28.791 }, 00:10:28.791 { 00:10:28.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.791 "dma_device_type": 2 00:10:28.791 }, 00:10:28.791 { 00:10:28.791 "dma_device_id": "system", 00:10:28.791 "dma_device_type": 1 00:10:28.791 }, 00:10:28.791 { 00:10:28.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.791 "dma_device_type": 2 00:10:28.791 } 00:10:28.791 ], 00:10:28.791 "driver_specific": { 00:10:28.791 "raid": { 00:10:28.791 "uuid": 
"6e646fa9-9c71-45a0-b7dd-705a91695a24", 00:10:28.791 "strip_size_kb": 0, 00:10:28.791 "state": "online", 00:10:28.791 "raid_level": "raid1", 00:10:28.791 "superblock": false, 00:10:28.791 "num_base_bdevs": 4, 00:10:28.791 "num_base_bdevs_discovered": 4, 00:10:28.791 "num_base_bdevs_operational": 4, 00:10:28.791 "base_bdevs_list": [ 00:10:28.791 { 00:10:28.791 "name": "BaseBdev1", 00:10:28.791 "uuid": "b4d1bf66-e843-4549-b44c-a4dd267dd4e9", 00:10:28.791 "is_configured": true, 00:10:28.791 "data_offset": 0, 00:10:28.791 "data_size": 65536 00:10:28.791 }, 00:10:28.791 { 00:10:28.791 "name": "BaseBdev2", 00:10:28.791 "uuid": "d3c0911c-4449-413b-85b0-20d81e99290c", 00:10:28.791 "is_configured": true, 00:10:28.791 "data_offset": 0, 00:10:28.791 "data_size": 65536 00:10:28.791 }, 00:10:28.791 { 00:10:28.791 "name": "BaseBdev3", 00:10:28.791 "uuid": "6a6a5131-c866-4f76-8dd2-aec7373dcb8d", 00:10:28.791 "is_configured": true, 00:10:28.791 "data_offset": 0, 00:10:28.791 "data_size": 65536 00:10:28.792 }, 00:10:28.792 { 00:10:28.792 "name": "BaseBdev4", 00:10:28.792 "uuid": "52560eb7-9f56-4501-a15a-3706565a7d82", 00:10:28.792 "is_configured": true, 00:10:28.792 "data_offset": 0, 00:10:28.792 "data_size": 65536 00:10:28.792 } 00:10:28.792 ] 00:10:28.792 } 00:10:28.792 } 00:10:28.792 }' 00:10:28.792 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.050 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.050 BaseBdev2 00:10:29.050 BaseBdev3 00:10:29.050 BaseBdev4' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.051 [2024-11-26 15:26:27.485308] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.051 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.310 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.310 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.310 "name": "Existed_Raid", 00:10:29.310 "uuid": "6e646fa9-9c71-45a0-b7dd-705a91695a24", 00:10:29.310 "strip_size_kb": 0, 00:10:29.310 "state": "online", 00:10:29.310 "raid_level": "raid1", 00:10:29.310 "superblock": false, 00:10:29.310 "num_base_bdevs": 4, 00:10:29.310 "num_base_bdevs_discovered": 3, 00:10:29.310 "num_base_bdevs_operational": 3, 00:10:29.310 "base_bdevs_list": [ 00:10:29.310 { 00:10:29.310 "name": null, 00:10:29.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.310 "is_configured": false, 00:10:29.310 "data_offset": 0, 00:10:29.311 "data_size": 65536 00:10:29.311 }, 00:10:29.311 { 00:10:29.311 "name": "BaseBdev2", 00:10:29.311 "uuid": "d3c0911c-4449-413b-85b0-20d81e99290c", 00:10:29.311 "is_configured": true, 00:10:29.311 "data_offset": 0, 00:10:29.311 "data_size": 65536 00:10:29.311 }, 00:10:29.311 { 00:10:29.311 "name": "BaseBdev3", 00:10:29.311 "uuid": "6a6a5131-c866-4f76-8dd2-aec7373dcb8d", 00:10:29.311 "is_configured": true, 00:10:29.311 "data_offset": 0, 00:10:29.311 "data_size": 65536 00:10:29.311 }, 00:10:29.311 { 00:10:29.311 "name": "BaseBdev4", 00:10:29.311 "uuid": "52560eb7-9f56-4501-a15a-3706565a7d82", 00:10:29.311 "is_configured": true, 00:10:29.311 "data_offset": 0, 00:10:29.311 "data_size": 65536 00:10:29.311 } 00:10:29.311 ] 00:10:29.311 }' 00:10:29.311 15:26:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.311 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.570 [2024-11-26 15:26:27.960678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.570 15:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.570 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.570 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.570 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:29.570 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.570 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.570 [2024-11-26 15:26:28.031897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.831 [2024-11-26 15:26:28.087050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:29.831 [2024-11-26 15:26:28.087149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.831 [2024-11-26 15:26:28.098421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.831 [2024-11-26 15:26:28.098473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.831 [2024-11-26 15:26:28.098497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.831 BaseBdev2 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.831 [ 00:10:29.831 { 00:10:29.831 "name": "BaseBdev2", 00:10:29.831 "aliases": [ 00:10:29.831 "93b355a6-7b55-48a1-878e-160ff0c6f8c5" 00:10:29.831 ], 00:10:29.831 "product_name": "Malloc disk", 00:10:29.831 "block_size": 512, 00:10:29.831 "num_blocks": 65536, 00:10:29.831 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:29.831 "assigned_rate_limits": { 00:10:29.831 "rw_ios_per_sec": 0, 00:10:29.831 "rw_mbytes_per_sec": 0, 00:10:29.831 "r_mbytes_per_sec": 0, 00:10:29.831 "w_mbytes_per_sec": 0 00:10:29.831 }, 00:10:29.831 "claimed": false, 00:10:29.831 "zoned": false, 00:10:29.831 "supported_io_types": { 00:10:29.831 "read": true, 00:10:29.831 "write": true, 00:10:29.831 "unmap": true, 00:10:29.831 "flush": true, 00:10:29.831 "reset": true, 00:10:29.831 "nvme_admin": false, 00:10:29.831 "nvme_io": false, 00:10:29.831 "nvme_io_md": false, 00:10:29.831 "write_zeroes": true, 00:10:29.831 "zcopy": true, 00:10:29.831 "get_zone_info": false, 00:10:29.831 "zone_management": false, 00:10:29.831 "zone_append": false, 00:10:29.831 "compare": false, 00:10:29.831 "compare_and_write": false, 00:10:29.831 "abort": true, 00:10:29.831 "seek_hole": false, 00:10:29.831 "seek_data": false, 00:10:29.831 "copy": true, 00:10:29.831 "nvme_iov_md": false 00:10:29.831 }, 00:10:29.831 "memory_domains": [ 00:10:29.831 { 00:10:29.831 "dma_device_id": "system", 00:10:29.831 "dma_device_type": 1 00:10:29.831 }, 
00:10:29.831 { 00:10:29.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.831 "dma_device_type": 2 00:10:29.831 } 00:10:29.831 ], 00:10:29.831 "driver_specific": {} 00:10:29.831 } 00:10:29.831 ] 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.831 BaseBdev3 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.831 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.832 [ 00:10:29.832 { 00:10:29.832 "name": "BaseBdev3", 00:10:29.832 "aliases": [ 00:10:29.832 "d79aad89-370c-4a4b-96b9-472eaddb593f" 00:10:29.832 ], 00:10:29.832 "product_name": "Malloc disk", 00:10:29.832 "block_size": 512, 00:10:29.832 "num_blocks": 65536, 00:10:29.832 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:29.832 "assigned_rate_limits": { 00:10:29.832 "rw_ios_per_sec": 0, 00:10:29.832 "rw_mbytes_per_sec": 0, 00:10:29.832 "r_mbytes_per_sec": 0, 00:10:29.832 "w_mbytes_per_sec": 0 00:10:29.832 }, 00:10:29.832 "claimed": false, 00:10:29.832 "zoned": false, 00:10:29.832 "supported_io_types": { 00:10:29.832 "read": true, 00:10:29.832 "write": true, 00:10:29.832 "unmap": true, 00:10:29.832 "flush": true, 00:10:29.832 "reset": true, 00:10:29.832 "nvme_admin": false, 00:10:29.832 "nvme_io": false, 00:10:29.832 "nvme_io_md": false, 00:10:29.832 "write_zeroes": true, 00:10:29.832 "zcopy": true, 00:10:29.832 "get_zone_info": false, 00:10:29.832 "zone_management": false, 00:10:29.832 "zone_append": false, 00:10:29.832 "compare": false, 00:10:29.832 "compare_and_write": false, 00:10:29.832 "abort": true, 00:10:29.832 "seek_hole": false, 00:10:29.832 "seek_data": false, 00:10:29.832 "copy": true, 00:10:29.832 "nvme_iov_md": false 00:10:29.832 }, 00:10:29.832 "memory_domains": [ 00:10:29.832 { 00:10:29.832 "dma_device_id": "system", 00:10:29.832 "dma_device_type": 1 00:10:29.832 }, 00:10:29.832 { 
00:10:29.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.832 "dma_device_type": 2 00:10:29.832 } 00:10:29.832 ], 00:10:29.832 "driver_specific": {} 00:10:29.832 } 00:10:29.832 ] 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.832 BaseBdev4 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.832 
15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.832 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.832 [ 00:10:29.832 { 00:10:29.832 "name": "BaseBdev4", 00:10:29.832 "aliases": [ 00:10:29.832 "71148908-6202-4183-bf89-aa2e96bca9d4" 00:10:29.832 ], 00:10:29.832 "product_name": "Malloc disk", 00:10:29.832 "block_size": 512, 00:10:29.832 "num_blocks": 65536, 00:10:29.832 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:29.832 "assigned_rate_limits": { 00:10:29.832 "rw_ios_per_sec": 0, 00:10:29.832 "rw_mbytes_per_sec": 0, 00:10:29.832 "r_mbytes_per_sec": 0, 00:10:29.832 "w_mbytes_per_sec": 0 00:10:29.832 }, 00:10:29.832 "claimed": false, 00:10:29.832 "zoned": false, 00:10:29.832 "supported_io_types": { 00:10:29.832 "read": true, 00:10:29.832 "write": true, 00:10:29.832 "unmap": true, 00:10:29.832 "flush": true, 00:10:29.832 "reset": true, 00:10:29.832 "nvme_admin": false, 00:10:29.832 "nvme_io": false, 00:10:29.832 "nvme_io_md": false, 00:10:29.832 "write_zeroes": true, 00:10:29.832 "zcopy": true, 00:10:29.832 "get_zone_info": false, 00:10:30.092 "zone_management": false, 00:10:30.092 "zone_append": false, 00:10:30.092 "compare": false, 00:10:30.092 "compare_and_write": false, 00:10:30.092 "abort": true, 00:10:30.092 "seek_hole": false, 00:10:30.092 "seek_data": false, 00:10:30.092 "copy": true, 00:10:30.092 "nvme_iov_md": false 00:10:30.092 }, 00:10:30.092 "memory_domains": [ 00:10:30.092 { 00:10:30.092 "dma_device_id": "system", 00:10:30.092 "dma_device_type": 1 00:10:30.092 }, 00:10:30.092 { 00:10:30.092 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.092 "dma_device_type": 2 00:10:30.092 } 00:10:30.092 ], 00:10:30.092 "driver_specific": {} 00:10:30.092 } 00:10:30.092 ] 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.092 [2024-11-26 15:26:28.315763] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.092 [2024-11-26 15:26:28.315811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.092 [2024-11-26 15:26:28.315832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.092 [2024-11-26 15:26:28.317677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.092 [2024-11-26 15:26:28.317744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.092 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.092 "name": "Existed_Raid", 00:10:30.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.092 "strip_size_kb": 0, 00:10:30.092 "state": "configuring", 00:10:30.092 "raid_level": "raid1", 00:10:30.092 "superblock": false, 00:10:30.092 "num_base_bdevs": 4, 00:10:30.092 "num_base_bdevs_discovered": 3, 00:10:30.092 "num_base_bdevs_operational": 4, 00:10:30.092 "base_bdevs_list": [ 
00:10:30.092 { 00:10:30.092 "name": "BaseBdev1", 00:10:30.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.092 "is_configured": false, 00:10:30.092 "data_offset": 0, 00:10:30.092 "data_size": 0 00:10:30.092 }, 00:10:30.092 { 00:10:30.092 "name": "BaseBdev2", 00:10:30.092 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:30.092 "is_configured": true, 00:10:30.092 "data_offset": 0, 00:10:30.093 "data_size": 65536 00:10:30.093 }, 00:10:30.093 { 00:10:30.093 "name": "BaseBdev3", 00:10:30.093 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:30.093 "is_configured": true, 00:10:30.093 "data_offset": 0, 00:10:30.093 "data_size": 65536 00:10:30.093 }, 00:10:30.093 { 00:10:30.093 "name": "BaseBdev4", 00:10:30.093 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:30.093 "is_configured": true, 00:10:30.093 "data_offset": 0, 00:10:30.093 "data_size": 65536 00:10:30.093 } 00:10:30.093 ] 00:10:30.093 }' 00:10:30.093 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.093 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.352 [2024-11-26 15:26:28.675860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.352 15:26:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.352 "name": "Existed_Raid", 00:10:30.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.352 "strip_size_kb": 0, 00:10:30.352 "state": "configuring", 00:10:30.352 "raid_level": "raid1", 00:10:30.352 "superblock": false, 00:10:30.352 "num_base_bdevs": 4, 00:10:30.352 "num_base_bdevs_discovered": 2, 00:10:30.352 "num_base_bdevs_operational": 4, 00:10:30.352 "base_bdevs_list": [ 00:10:30.352 { 00:10:30.352 "name": "BaseBdev1", 
00:10:30.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.352 "is_configured": false, 00:10:30.352 "data_offset": 0, 00:10:30.352 "data_size": 0 00:10:30.352 }, 00:10:30.352 { 00:10:30.352 "name": null, 00:10:30.352 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:30.352 "is_configured": false, 00:10:30.352 "data_offset": 0, 00:10:30.352 "data_size": 65536 00:10:30.352 }, 00:10:30.352 { 00:10:30.352 "name": "BaseBdev3", 00:10:30.352 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:30.352 "is_configured": true, 00:10:30.352 "data_offset": 0, 00:10:30.352 "data_size": 65536 00:10:30.352 }, 00:10:30.352 { 00:10:30.352 "name": "BaseBdev4", 00:10:30.352 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:30.352 "is_configured": true, 00:10:30.352 "data_offset": 0, 00:10:30.352 "data_size": 65536 00:10:30.352 } 00:10:30.352 ] 00:10:30.352 }' 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.352 15:26:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.920 [2024-11-26 15:26:29.179100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.920 BaseBdev1 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.920 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.920 [ 00:10:30.920 { 00:10:30.920 "name": "BaseBdev1", 00:10:30.920 "aliases": [ 00:10:30.920 "6f16afd8-858a-43f0-83fe-d9b4a14fe995" 00:10:30.920 ], 00:10:30.920 
"product_name": "Malloc disk", 00:10:30.920 "block_size": 512, 00:10:30.920 "num_blocks": 65536, 00:10:30.920 "uuid": "6f16afd8-858a-43f0-83fe-d9b4a14fe995", 00:10:30.920 "assigned_rate_limits": { 00:10:30.920 "rw_ios_per_sec": 0, 00:10:30.920 "rw_mbytes_per_sec": 0, 00:10:30.920 "r_mbytes_per_sec": 0, 00:10:30.920 "w_mbytes_per_sec": 0 00:10:30.920 }, 00:10:30.920 "claimed": true, 00:10:30.920 "claim_type": "exclusive_write", 00:10:30.920 "zoned": false, 00:10:30.920 "supported_io_types": { 00:10:30.920 "read": true, 00:10:30.920 "write": true, 00:10:30.920 "unmap": true, 00:10:30.920 "flush": true, 00:10:30.920 "reset": true, 00:10:30.920 "nvme_admin": false, 00:10:30.920 "nvme_io": false, 00:10:30.920 "nvme_io_md": false, 00:10:30.920 "write_zeroes": true, 00:10:30.920 "zcopy": true, 00:10:30.920 "get_zone_info": false, 00:10:30.920 "zone_management": false, 00:10:30.920 "zone_append": false, 00:10:30.920 "compare": false, 00:10:30.920 "compare_and_write": false, 00:10:30.920 "abort": true, 00:10:30.920 "seek_hole": false, 00:10:30.920 "seek_data": false, 00:10:30.920 "copy": true, 00:10:30.920 "nvme_iov_md": false 00:10:30.920 }, 00:10:30.920 "memory_domains": [ 00:10:30.920 { 00:10:30.920 "dma_device_id": "system", 00:10:30.920 "dma_device_type": 1 00:10:30.920 }, 00:10:30.920 { 00:10:30.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.920 "dma_device_type": 2 00:10:30.920 } 00:10:30.920 ], 00:10:30.920 "driver_specific": {} 00:10:30.920 } 00:10:30.920 ] 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.921 15:26:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.921 "name": "Existed_Raid", 00:10:30.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.921 "strip_size_kb": 0, 00:10:30.921 "state": "configuring", 00:10:30.921 "raid_level": "raid1", 00:10:30.921 "superblock": false, 00:10:30.921 "num_base_bdevs": 4, 00:10:30.921 "num_base_bdevs_discovered": 3, 00:10:30.921 "num_base_bdevs_operational": 4, 00:10:30.921 "base_bdevs_list": [ 00:10:30.921 { 00:10:30.921 "name": "BaseBdev1", 
00:10:30.921 "uuid": "6f16afd8-858a-43f0-83fe-d9b4a14fe995", 00:10:30.921 "is_configured": true, 00:10:30.921 "data_offset": 0, 00:10:30.921 "data_size": 65536 00:10:30.921 }, 00:10:30.921 { 00:10:30.921 "name": null, 00:10:30.921 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:30.921 "is_configured": false, 00:10:30.921 "data_offset": 0, 00:10:30.921 "data_size": 65536 00:10:30.921 }, 00:10:30.921 { 00:10:30.921 "name": "BaseBdev3", 00:10:30.921 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:30.921 "is_configured": true, 00:10:30.921 "data_offset": 0, 00:10:30.921 "data_size": 65536 00:10:30.921 }, 00:10:30.921 { 00:10:30.921 "name": "BaseBdev4", 00:10:30.921 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:30.921 "is_configured": true, 00:10:30.921 "data_offset": 0, 00:10:30.921 "data_size": 65536 00:10:30.921 } 00:10:30.921 ] 00:10:30.921 }' 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.921 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.180 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.180 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.180 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.180 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.439 [2024-11-26 15:26:29.699307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.439 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.440 "name": "Existed_Raid", 00:10:31.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.440 "strip_size_kb": 0, 00:10:31.440 "state": "configuring", 00:10:31.440 "raid_level": "raid1", 00:10:31.440 "superblock": false, 00:10:31.440 "num_base_bdevs": 4, 00:10:31.440 "num_base_bdevs_discovered": 2, 00:10:31.440 "num_base_bdevs_operational": 4, 00:10:31.440 "base_bdevs_list": [ 00:10:31.440 { 00:10:31.440 "name": "BaseBdev1", 00:10:31.440 "uuid": "6f16afd8-858a-43f0-83fe-d9b4a14fe995", 00:10:31.440 "is_configured": true, 00:10:31.440 "data_offset": 0, 00:10:31.440 "data_size": 65536 00:10:31.440 }, 00:10:31.440 { 00:10:31.440 "name": null, 00:10:31.440 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:31.440 "is_configured": false, 00:10:31.440 "data_offset": 0, 00:10:31.440 "data_size": 65536 00:10:31.440 }, 00:10:31.440 { 00:10:31.440 "name": null, 00:10:31.440 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:31.440 "is_configured": false, 00:10:31.440 "data_offset": 0, 00:10:31.440 "data_size": 65536 00:10:31.440 }, 00:10:31.440 { 00:10:31.440 "name": "BaseBdev4", 00:10:31.440 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:31.440 "is_configured": true, 00:10:31.440 "data_offset": 0, 00:10:31.440 "data_size": 65536 00:10:31.440 } 00:10:31.440 ] 00:10:31.440 }' 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.440 15:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.698 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.698 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.698 15:26:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.698 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.698 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.699 [2024-11-26 15:26:30.107471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.699 "name": "Existed_Raid", 00:10:31.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.699 "strip_size_kb": 0, 00:10:31.699 "state": "configuring", 00:10:31.699 "raid_level": "raid1", 00:10:31.699 "superblock": false, 00:10:31.699 "num_base_bdevs": 4, 00:10:31.699 "num_base_bdevs_discovered": 3, 00:10:31.699 "num_base_bdevs_operational": 4, 00:10:31.699 "base_bdevs_list": [ 00:10:31.699 { 00:10:31.699 "name": "BaseBdev1", 00:10:31.699 "uuid": "6f16afd8-858a-43f0-83fe-d9b4a14fe995", 00:10:31.699 "is_configured": true, 00:10:31.699 "data_offset": 0, 00:10:31.699 "data_size": 65536 00:10:31.699 }, 00:10:31.699 { 00:10:31.699 "name": null, 00:10:31.699 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:31.699 "is_configured": false, 00:10:31.699 "data_offset": 0, 00:10:31.699 "data_size": 65536 00:10:31.699 }, 00:10:31.699 { 00:10:31.699 "name": "BaseBdev3", 00:10:31.699 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:31.699 "is_configured": true, 00:10:31.699 "data_offset": 0, 00:10:31.699 "data_size": 65536 00:10:31.699 }, 00:10:31.699 { 00:10:31.699 "name": "BaseBdev4", 00:10:31.699 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:31.699 
"is_configured": true, 00:10:31.699 "data_offset": 0, 00:10:31.699 "data_size": 65536 00:10:31.699 } 00:10:31.699 ] 00:10:31.699 }' 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.699 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.267 [2024-11-26 15:26:30.571589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.267 15:26:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.267 "name": "Existed_Raid", 00:10:32.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.267 "strip_size_kb": 0, 00:10:32.267 "state": "configuring", 00:10:32.267 "raid_level": "raid1", 00:10:32.267 "superblock": false, 00:10:32.267 "num_base_bdevs": 4, 00:10:32.267 "num_base_bdevs_discovered": 2, 00:10:32.267 "num_base_bdevs_operational": 4, 00:10:32.267 "base_bdevs_list": [ 00:10:32.267 { 00:10:32.267 "name": null, 00:10:32.267 "uuid": "6f16afd8-858a-43f0-83fe-d9b4a14fe995", 00:10:32.267 "is_configured": false, 00:10:32.267 "data_offset": 0, 
00:10:32.267 "data_size": 65536 00:10:32.267 }, 00:10:32.267 { 00:10:32.267 "name": null, 00:10:32.267 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:32.267 "is_configured": false, 00:10:32.267 "data_offset": 0, 00:10:32.267 "data_size": 65536 00:10:32.267 }, 00:10:32.267 { 00:10:32.267 "name": "BaseBdev3", 00:10:32.267 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:32.267 "is_configured": true, 00:10:32.267 "data_offset": 0, 00:10:32.267 "data_size": 65536 00:10:32.267 }, 00:10:32.267 { 00:10:32.267 "name": "BaseBdev4", 00:10:32.267 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:32.267 "is_configured": true, 00:10:32.267 "data_offset": 0, 00:10:32.267 "data_size": 65536 00:10:32.267 } 00:10:32.267 ] 00:10:32.267 }' 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.267 15:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.861 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.862 
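The trace above repeatedly runs `verify_raid_bdev_state`, which fetches the array's info with `bdev_raid_get_bdevs`, selects it with `jq`, and compares fields such as `state` and `num_base_bdevs_discovered` against expectations. A minimal standalone sketch of that check — hypothetical, using an inlined copy of the JSON seen in the trace and plain `sed` instead of `jq`, so it needs no running SPDK target — might look like:

```shell
#!/bin/sh
# Hypothetical stand-in for verify_raid_bdev_state: given raid_bdev_info JSON
# like the dumps in this trace, confirm the array is still "configuring"
# because not all base bdevs have been discovered yet.
info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3
}'

# Extract the two fields the test asserts on (sed used here only to keep the
# sketch dependency-free; the real script uses jq).
state=$(printf '%s\n' "$info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
discovered=$(printf '%s\n' "$info" | sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p')

echo "state=$state discovered=$discovered"

# The array may only report "online" once discovered == num_base_bdevs.
if [ "$state" = "configuring" ] && [ "$discovered" -lt 4 ]; then
    echo "still configuring"
fi
```

In the real test the same comparison is what gates each step: after removing a base bdev the expected state stays `configuring` with a lower `num_base_bdevs_discovered`, and adding it back raises the count again.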
[2024-11-26 15:26:31.054105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.862 15:26:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.862 "name": "Existed_Raid", 00:10:32.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.862 "strip_size_kb": 0, 00:10:32.862 "state": "configuring", 00:10:32.862 "raid_level": "raid1", 00:10:32.862 "superblock": false, 00:10:32.862 "num_base_bdevs": 4, 00:10:32.862 "num_base_bdevs_discovered": 3, 00:10:32.862 "num_base_bdevs_operational": 4, 00:10:32.862 "base_bdevs_list": [ 00:10:32.862 { 00:10:32.862 "name": null, 00:10:32.862 "uuid": "6f16afd8-858a-43f0-83fe-d9b4a14fe995", 00:10:32.862 "is_configured": false, 00:10:32.862 "data_offset": 0, 00:10:32.862 "data_size": 65536 00:10:32.862 }, 00:10:32.862 { 00:10:32.862 "name": "BaseBdev2", 00:10:32.862 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:32.862 "is_configured": true, 00:10:32.862 "data_offset": 0, 00:10:32.862 "data_size": 65536 00:10:32.862 }, 00:10:32.862 { 00:10:32.862 "name": "BaseBdev3", 00:10:32.862 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:32.862 "is_configured": true, 00:10:32.862 "data_offset": 0, 00:10:32.862 "data_size": 65536 00:10:32.862 }, 00:10:32.862 { 00:10:32.862 "name": "BaseBdev4", 00:10:32.862 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:32.862 "is_configured": true, 00:10:32.862 "data_offset": 0, 00:10:32.862 "data_size": 65536 00:10:32.862 } 00:10:32.862 ] 00:10:32.862 }' 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.862 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.123 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.123 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.123 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.123 15:26:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.123 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6f16afd8-858a-43f0-83fe-d9b4a14fe995 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.124 [2024-11-26 15:26:31.557294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.124 [2024-11-26 15:26:31.557344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:33.124 [2024-11-26 15:26:31.557352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:33.124 [2024-11-26 15:26:31.557597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:33.124 [2024-11-26 15:26:31.557734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:33.124 [2024-11-26 15:26:31.557758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name Existed_Raid, raid_bdev 0x617000007e80 00:10:33.124 [2024-11-26 15:26:31.557936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.124 NewBaseBdev 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.124 [ 00:10:33.124 { 00:10:33.124 "name": "NewBaseBdev", 00:10:33.124 "aliases": [ 00:10:33.124 "6f16afd8-858a-43f0-83fe-d9b4a14fe995" 00:10:33.124 ], 00:10:33.124 "product_name": "Malloc disk", 00:10:33.124 "block_size": 512, 
00:10:33.124 "num_blocks": 65536, 00:10:33.124 "uuid": "6f16afd8-858a-43f0-83fe-d9b4a14fe995", 00:10:33.124 "assigned_rate_limits": { 00:10:33.124 "rw_ios_per_sec": 0, 00:10:33.124 "rw_mbytes_per_sec": 0, 00:10:33.124 "r_mbytes_per_sec": 0, 00:10:33.124 "w_mbytes_per_sec": 0 00:10:33.124 }, 00:10:33.124 "claimed": true, 00:10:33.124 "claim_type": "exclusive_write", 00:10:33.124 "zoned": false, 00:10:33.124 "supported_io_types": { 00:10:33.124 "read": true, 00:10:33.124 "write": true, 00:10:33.124 "unmap": true, 00:10:33.124 "flush": true, 00:10:33.124 "reset": true, 00:10:33.124 "nvme_admin": false, 00:10:33.124 "nvme_io": false, 00:10:33.124 "nvme_io_md": false, 00:10:33.124 "write_zeroes": true, 00:10:33.124 "zcopy": true, 00:10:33.124 "get_zone_info": false, 00:10:33.124 "zone_management": false, 00:10:33.124 "zone_append": false, 00:10:33.124 "compare": false, 00:10:33.124 "compare_and_write": false, 00:10:33.124 "abort": true, 00:10:33.124 "seek_hole": false, 00:10:33.124 "seek_data": false, 00:10:33.124 "copy": true, 00:10:33.124 "nvme_iov_md": false 00:10:33.124 }, 00:10:33.124 "memory_domains": [ 00:10:33.124 { 00:10:33.124 "dma_device_id": "system", 00:10:33.124 "dma_device_type": 1 00:10:33.124 }, 00:10:33.124 { 00:10:33.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.124 "dma_device_type": 2 00:10:33.124 } 00:10:33.124 ], 00:10:33.124 "driver_specific": {} 00:10:33.124 } 00:10:33.124 ] 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.124 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.384 "name": "Existed_Raid", 00:10:33.384 "uuid": "6f987d33-e295-4202-9e0d-62934e3fdf2c", 00:10:33.384 "strip_size_kb": 0, 00:10:33.384 "state": "online", 00:10:33.384 "raid_level": "raid1", 00:10:33.384 "superblock": false, 00:10:33.384 "num_base_bdevs": 4, 00:10:33.384 "num_base_bdevs_discovered": 4, 00:10:33.384 "num_base_bdevs_operational": 4, 00:10:33.384 "base_bdevs_list": [ 00:10:33.384 { 00:10:33.384 "name": "NewBaseBdev", 00:10:33.384 "uuid": "6f16afd8-858a-43f0-83fe-d9b4a14fe995", 00:10:33.384 
"is_configured": true, 00:10:33.384 "data_offset": 0, 00:10:33.384 "data_size": 65536 00:10:33.384 }, 00:10:33.384 { 00:10:33.384 "name": "BaseBdev2", 00:10:33.384 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:33.384 "is_configured": true, 00:10:33.384 "data_offset": 0, 00:10:33.384 "data_size": 65536 00:10:33.384 }, 00:10:33.384 { 00:10:33.384 "name": "BaseBdev3", 00:10:33.384 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:33.384 "is_configured": true, 00:10:33.384 "data_offset": 0, 00:10:33.384 "data_size": 65536 00:10:33.384 }, 00:10:33.384 { 00:10:33.384 "name": "BaseBdev4", 00:10:33.384 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:33.384 "is_configured": true, 00:10:33.384 "data_offset": 0, 00:10:33.384 "data_size": 65536 00:10:33.384 } 00:10:33.384 ] 00:10:33.384 }' 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.384 15:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.643 [2024-11-26 15:26:32.021804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.643 "name": "Existed_Raid", 00:10:33.643 "aliases": [ 00:10:33.643 "6f987d33-e295-4202-9e0d-62934e3fdf2c" 00:10:33.643 ], 00:10:33.643 "product_name": "Raid Volume", 00:10:33.643 "block_size": 512, 00:10:33.643 "num_blocks": 65536, 00:10:33.643 "uuid": "6f987d33-e295-4202-9e0d-62934e3fdf2c", 00:10:33.643 "assigned_rate_limits": { 00:10:33.643 "rw_ios_per_sec": 0, 00:10:33.643 "rw_mbytes_per_sec": 0, 00:10:33.643 "r_mbytes_per_sec": 0, 00:10:33.643 "w_mbytes_per_sec": 0 00:10:33.643 }, 00:10:33.643 "claimed": false, 00:10:33.643 "zoned": false, 00:10:33.643 "supported_io_types": { 00:10:33.643 "read": true, 00:10:33.643 "write": true, 00:10:33.643 "unmap": false, 00:10:33.643 "flush": false, 00:10:33.643 "reset": true, 00:10:33.643 "nvme_admin": false, 00:10:33.643 "nvme_io": false, 00:10:33.643 "nvme_io_md": false, 00:10:33.643 "write_zeroes": true, 00:10:33.643 "zcopy": false, 00:10:33.643 "get_zone_info": false, 00:10:33.643 "zone_management": false, 00:10:33.643 "zone_append": false, 00:10:33.643 "compare": false, 00:10:33.643 "compare_and_write": false, 00:10:33.643 "abort": false, 00:10:33.643 "seek_hole": false, 00:10:33.643 "seek_data": false, 00:10:33.643 "copy": false, 00:10:33.643 "nvme_iov_md": false 00:10:33.643 }, 00:10:33.643 "memory_domains": [ 00:10:33.643 { 00:10:33.643 "dma_device_id": "system", 00:10:33.643 "dma_device_type": 1 00:10:33.643 }, 00:10:33.643 { 00:10:33.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.643 "dma_device_type": 2 00:10:33.643 }, 
00:10:33.643 { 00:10:33.643 "dma_device_id": "system", 00:10:33.643 "dma_device_type": 1 00:10:33.643 }, 00:10:33.643 { 00:10:33.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.643 "dma_device_type": 2 00:10:33.643 }, 00:10:33.643 { 00:10:33.643 "dma_device_id": "system", 00:10:33.643 "dma_device_type": 1 00:10:33.643 }, 00:10:33.643 { 00:10:33.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.643 "dma_device_type": 2 00:10:33.643 }, 00:10:33.643 { 00:10:33.643 "dma_device_id": "system", 00:10:33.643 "dma_device_type": 1 00:10:33.643 }, 00:10:33.643 { 00:10:33.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.643 "dma_device_type": 2 00:10:33.643 } 00:10:33.643 ], 00:10:33.643 "driver_specific": { 00:10:33.643 "raid": { 00:10:33.643 "uuid": "6f987d33-e295-4202-9e0d-62934e3fdf2c", 00:10:33.643 "strip_size_kb": 0, 00:10:33.643 "state": "online", 00:10:33.643 "raid_level": "raid1", 00:10:33.643 "superblock": false, 00:10:33.643 "num_base_bdevs": 4, 00:10:33.643 "num_base_bdevs_discovered": 4, 00:10:33.643 "num_base_bdevs_operational": 4, 00:10:33.643 "base_bdevs_list": [ 00:10:33.643 { 00:10:33.643 "name": "NewBaseBdev", 00:10:33.643 "uuid": "6f16afd8-858a-43f0-83fe-d9b4a14fe995", 00:10:33.643 "is_configured": true, 00:10:33.643 "data_offset": 0, 00:10:33.643 "data_size": 65536 00:10:33.643 }, 00:10:33.643 { 00:10:33.643 "name": "BaseBdev2", 00:10:33.643 "uuid": "93b355a6-7b55-48a1-878e-160ff0c6f8c5", 00:10:33.643 "is_configured": true, 00:10:33.643 "data_offset": 0, 00:10:33.643 "data_size": 65536 00:10:33.643 }, 00:10:33.643 { 00:10:33.643 "name": "BaseBdev3", 00:10:33.643 "uuid": "d79aad89-370c-4a4b-96b9-472eaddb593f", 00:10:33.643 "is_configured": true, 00:10:33.643 "data_offset": 0, 00:10:33.643 "data_size": 65536 00:10:33.643 }, 00:10:33.643 { 00:10:33.643 "name": "BaseBdev4", 00:10:33.643 "uuid": "71148908-6202-4183-bf89-aa2e96bca9d4", 00:10:33.643 "is_configured": true, 00:10:33.643 "data_offset": 0, 00:10:33.643 "data_size": 65536 
00:10:33.643 } 00:10:33.643 ] 00:10:33.643 } 00:10:33.643 } 00:10:33.643 }' 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.643 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:33.643 BaseBdev2 00:10:33.643 BaseBdev3 00:10:33.643 BaseBdev4' 00:10:33.644 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:33.903 15:26:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.903 [2024-11-26 15:26:32.297527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.903 [2024-11-26 15:26:32.297561] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.903 [2024-11-26 15:26:32.297632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.903 [2024-11-26 15:26:32.297888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.903 [2024-11-26 15:26:32.297907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 85537 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 85537 ']' 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 85537 00:10:33.903 15:26:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:33.903 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.904 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85537 00:10:33.904 killing process with pid 85537 00:10:33.904 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.904 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.904 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85537' 00:10:33.904 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 85537 00:10:33.904 [2024-11-26 15:26:32.345768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.904 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 85537 00:10:34.163 [2024-11-26 15:26:32.386053] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.163 15:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:34.163 00:10:34.163 real 0m9.085s 00:10:34.163 user 0m15.503s 00:10:34.163 sys 0m1.911s 00:10:34.163 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.163 15:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.163 ************************************ 00:10:34.163 END TEST raid_state_function_test 00:10:34.163 ************************************ 00:10:34.423 15:26:32 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:34.423 15:26:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.423 15:26:32 bdev_raid -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:10:34.423 15:26:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.423 ************************************ 00:10:34.423 START TEST raid_state_function_test_sb 00:10:34.423 ************************************ 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=86182 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86182' 00:10:34.423 Process raid pid: 86182 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 86182 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86182 ']' 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.423 15:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.423 [2024-11-26 15:26:32.773552] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:10:34.423 [2024-11-26 15:26:32.773681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.683 [2024-11-26 15:26:32.909655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:34.683 [2024-11-26 15:26:32.945328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.683 [2024-11-26 15:26:32.971268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.683 [2024-11-26 15:26:33.013722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.683 [2024-11-26 15:26:33.013758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.252 [2024-11-26 15:26:33.596569] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.252 [2024-11-26 15:26:33.596622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.252 [2024-11-26 15:26:33.596634] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.252 [2024-11-26 15:26:33.596642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.252 [2024-11-26 15:26:33.596652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.252 [2024-11-26 15:26:33.596659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.252 [2024-11-26 15:26:33.596669] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.252 
[2024-11-26 15:26:33.596675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.252 "name": "Existed_Raid", 00:10:35.252 "uuid": "0b59d5bb-12c8-4d29-a3f3-d5999cda06b1", 00:10:35.252 "strip_size_kb": 0, 00:10:35.252 "state": "configuring", 00:10:35.252 "raid_level": "raid1", 00:10:35.252 "superblock": true, 00:10:35.252 "num_base_bdevs": 4, 00:10:35.252 "num_base_bdevs_discovered": 0, 00:10:35.252 "num_base_bdevs_operational": 4, 00:10:35.252 "base_bdevs_list": [ 00:10:35.252 { 00:10:35.252 "name": "BaseBdev1", 00:10:35.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.252 "is_configured": false, 00:10:35.252 "data_offset": 0, 00:10:35.252 "data_size": 0 00:10:35.252 }, 00:10:35.252 { 00:10:35.252 "name": "BaseBdev2", 00:10:35.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.252 "is_configured": false, 00:10:35.252 "data_offset": 0, 00:10:35.252 "data_size": 0 00:10:35.252 }, 00:10:35.252 { 00:10:35.252 "name": "BaseBdev3", 00:10:35.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.252 "is_configured": false, 00:10:35.252 "data_offset": 0, 00:10:35.252 "data_size": 0 00:10:35.252 }, 00:10:35.252 { 00:10:35.252 "name": "BaseBdev4", 00:10:35.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.252 "is_configured": false, 00:10:35.252 "data_offset": 0, 00:10:35.252 "data_size": 0 00:10:35.252 } 00:10:35.252 ] 00:10:35.252 }' 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.252 15:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.822 
[2024-11-26 15:26:34.008572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.822 [2024-11-26 15:26:34.008608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.822 [2024-11-26 15:26:34.020598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.822 [2024-11-26 15:26:34.020640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.822 [2024-11-26 15:26:34.020650] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.822 [2024-11-26 15:26:34.020657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.822 [2024-11-26 15:26:34.020665] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.822 [2024-11-26 15:26:34.020671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.822 [2024-11-26 15:26:34.020679] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.822 [2024-11-26 15:26:34.020692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.822 15:26:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.822 [2024-11-26 15:26:34.041502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.822 BaseBdev1 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:35.822 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.822 [ 00:10:35.822 { 00:10:35.822 "name": "BaseBdev1", 00:10:35.822 "aliases": [ 00:10:35.822 "557b8f46-6956-4bd6-8cd8-dac5e07adf59" 00:10:35.822 ], 00:10:35.822 "product_name": "Malloc disk", 00:10:35.822 "block_size": 512, 00:10:35.822 "num_blocks": 65536, 00:10:35.822 "uuid": "557b8f46-6956-4bd6-8cd8-dac5e07adf59", 00:10:35.822 "assigned_rate_limits": { 00:10:35.822 "rw_ios_per_sec": 0, 00:10:35.822 "rw_mbytes_per_sec": 0, 00:10:35.822 "r_mbytes_per_sec": 0, 00:10:35.822 "w_mbytes_per_sec": 0 00:10:35.822 }, 00:10:35.822 "claimed": true, 00:10:35.822 "claim_type": "exclusive_write", 00:10:35.822 "zoned": false, 00:10:35.822 "supported_io_types": { 00:10:35.822 "read": true, 00:10:35.822 "write": true, 00:10:35.822 "unmap": true, 00:10:35.822 "flush": true, 00:10:35.822 "reset": true, 00:10:35.822 "nvme_admin": false, 00:10:35.822 "nvme_io": false, 00:10:35.822 "nvme_io_md": false, 00:10:35.822 "write_zeroes": true, 00:10:35.822 "zcopy": true, 00:10:35.822 "get_zone_info": false, 00:10:35.822 "zone_management": false, 00:10:35.822 "zone_append": false, 00:10:35.822 "compare": false, 00:10:35.822 "compare_and_write": false, 00:10:35.822 "abort": true, 00:10:35.822 "seek_hole": false, 00:10:35.822 "seek_data": false, 00:10:35.822 "copy": true, 00:10:35.822 "nvme_iov_md": false 00:10:35.822 }, 00:10:35.822 "memory_domains": [ 00:10:35.822 { 00:10:35.822 "dma_device_id": "system", 00:10:35.822 "dma_device_type": 1 00:10:35.822 }, 00:10:35.822 { 00:10:35.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.823 "dma_device_type": 2 00:10:35.823 } 00:10:35.823 ], 00:10:35.823 "driver_specific": {} 00:10:35.823 } 00:10:35.823 ] 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:35.823 
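The trace above registers a 32 MiB malloc bdev with 512-byte blocks and then polls until it appears. A minimal sketch of the same flow, assuming a running SPDK target and `rpc.py` on the PATH (both setup details not shown in this log), might look like:

```shell
# Create a 32 MiB malloc bdev with 512-byte blocks, as in the trace above.
rpc.py bdev_malloc_create 32 512 -b BaseBdev1

# Let examine callbacks finish, then confirm the bdev exists (2000 ms timeout,
# matching the -t 2000 used by waitforbdev in the trace).
rpc.py bdev_wait_for_examine
rpc.py bdev_get_bdevs -b BaseBdev1 -t 2000
```

The `bdev_get_bdevs` output is the JSON block seen in the log: a "Malloc disk" with `claimed: true` and `claim_type: "exclusive_write"` once the raid module has taken ownership of it.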
15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.823 "name": "Existed_Raid", 00:10:35.823 "uuid": "64006639-a7da-493a-9dbe-72d9e44bfa03", 00:10:35.823 "strip_size_kb": 0, 
00:10:35.823 "state": "configuring", 00:10:35.823 "raid_level": "raid1", 00:10:35.823 "superblock": true, 00:10:35.823 "num_base_bdevs": 4, 00:10:35.823 "num_base_bdevs_discovered": 1, 00:10:35.823 "num_base_bdevs_operational": 4, 00:10:35.823 "base_bdevs_list": [ 00:10:35.823 { 00:10:35.823 "name": "BaseBdev1", 00:10:35.823 "uuid": "557b8f46-6956-4bd6-8cd8-dac5e07adf59", 00:10:35.823 "is_configured": true, 00:10:35.823 "data_offset": 2048, 00:10:35.823 "data_size": 63488 00:10:35.823 }, 00:10:35.823 { 00:10:35.823 "name": "BaseBdev2", 00:10:35.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.823 "is_configured": false, 00:10:35.823 "data_offset": 0, 00:10:35.823 "data_size": 0 00:10:35.823 }, 00:10:35.823 { 00:10:35.823 "name": "BaseBdev3", 00:10:35.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.823 "is_configured": false, 00:10:35.823 "data_offset": 0, 00:10:35.823 "data_size": 0 00:10:35.823 }, 00:10:35.823 { 00:10:35.823 "name": "BaseBdev4", 00:10:35.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.823 "is_configured": false, 00:10:35.823 "data_offset": 0, 00:10:35.823 "data_size": 0 00:10:35.823 } 00:10:35.823 ] 00:10:35.823 }' 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.823 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.082 [2024-11-26 15:26:34.433637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.082 [2024-11-26 15:26:34.433743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.082 [2024-11-26 15:26:34.445685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.082 [2024-11-26 15:26:34.447501] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.082 [2024-11-26 15:26:34.447540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.082 [2024-11-26 15:26:34.447551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.082 [2024-11-26 15:26:34.447574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.082 [2024-11-26 15:26:34.447582] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.082 [2024-11-26 15:26:34.447589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.082 "name": "Existed_Raid", 00:10:36.082 "uuid": "52824c7b-5956-4990-983e-fcff5926f691", 00:10:36.082 "strip_size_kb": 0, 00:10:36.082 "state": "configuring", 00:10:36.082 "raid_level": "raid1", 00:10:36.082 "superblock": true, 00:10:36.082 "num_base_bdevs": 4, 00:10:36.082 "num_base_bdevs_discovered": 1, 00:10:36.082 
"num_base_bdevs_operational": 4, 00:10:36.082 "base_bdevs_list": [ 00:10:36.082 { 00:10:36.082 "name": "BaseBdev1", 00:10:36.082 "uuid": "557b8f46-6956-4bd6-8cd8-dac5e07adf59", 00:10:36.082 "is_configured": true, 00:10:36.082 "data_offset": 2048, 00:10:36.082 "data_size": 63488 00:10:36.082 }, 00:10:36.082 { 00:10:36.082 "name": "BaseBdev2", 00:10:36.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.082 "is_configured": false, 00:10:36.082 "data_offset": 0, 00:10:36.082 "data_size": 0 00:10:36.082 }, 00:10:36.082 { 00:10:36.082 "name": "BaseBdev3", 00:10:36.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.082 "is_configured": false, 00:10:36.082 "data_offset": 0, 00:10:36.082 "data_size": 0 00:10:36.082 }, 00:10:36.082 { 00:10:36.082 "name": "BaseBdev4", 00:10:36.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.082 "is_configured": false, 00:10:36.082 "data_offset": 0, 00:10:36.082 "data_size": 0 00:10:36.082 } 00:10:36.082 ] 00:10:36.082 }' 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.082 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.652 [2024-11-26 15:26:34.924971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.652 BaseBdev2 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.652 [ 00:10:36.652 { 00:10:36.652 "name": "BaseBdev2", 00:10:36.652 "aliases": [ 00:10:36.652 "311afb59-4316-47eb-b3a8-02bc6420d11b" 00:10:36.652 ], 00:10:36.652 "product_name": "Malloc disk", 00:10:36.652 "block_size": 512, 00:10:36.652 "num_blocks": 65536, 00:10:36.652 "uuid": "311afb59-4316-47eb-b3a8-02bc6420d11b", 00:10:36.652 "assigned_rate_limits": { 00:10:36.652 "rw_ios_per_sec": 0, 00:10:36.652 "rw_mbytes_per_sec": 0, 00:10:36.652 "r_mbytes_per_sec": 0, 00:10:36.652 "w_mbytes_per_sec": 0 00:10:36.652 }, 00:10:36.652 "claimed": true, 00:10:36.652 "claim_type": "exclusive_write", 00:10:36.652 "zoned": false, 00:10:36.652 "supported_io_types": { 
00:10:36.652 "read": true, 00:10:36.652 "write": true, 00:10:36.652 "unmap": true, 00:10:36.652 "flush": true, 00:10:36.652 "reset": true, 00:10:36.652 "nvme_admin": false, 00:10:36.652 "nvme_io": false, 00:10:36.652 "nvme_io_md": false, 00:10:36.652 "write_zeroes": true, 00:10:36.652 "zcopy": true, 00:10:36.652 "get_zone_info": false, 00:10:36.652 "zone_management": false, 00:10:36.652 "zone_append": false, 00:10:36.652 "compare": false, 00:10:36.652 "compare_and_write": false, 00:10:36.652 "abort": true, 00:10:36.652 "seek_hole": false, 00:10:36.652 "seek_data": false, 00:10:36.652 "copy": true, 00:10:36.652 "nvme_iov_md": false 00:10:36.652 }, 00:10:36.652 "memory_domains": [ 00:10:36.652 { 00:10:36.652 "dma_device_id": "system", 00:10:36.652 "dma_device_type": 1 00:10:36.652 }, 00:10:36.652 { 00:10:36.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.652 "dma_device_type": 2 00:10:36.652 } 00:10:36.652 ], 00:10:36.652 "driver_specific": {} 00:10:36.652 } 00:10:36.652 ] 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.652 15:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.652 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.652 "name": "Existed_Raid", 00:10:36.652 "uuid": "52824c7b-5956-4990-983e-fcff5926f691", 00:10:36.652 "strip_size_kb": 0, 00:10:36.652 "state": "configuring", 00:10:36.652 "raid_level": "raid1", 00:10:36.652 "superblock": true, 00:10:36.652 "num_base_bdevs": 4, 00:10:36.652 "num_base_bdevs_discovered": 2, 00:10:36.652 "num_base_bdevs_operational": 4, 00:10:36.652 "base_bdevs_list": [ 00:10:36.652 { 00:10:36.652 "name": "BaseBdev1", 00:10:36.652 "uuid": "557b8f46-6956-4bd6-8cd8-dac5e07adf59", 00:10:36.652 "is_configured": true, 00:10:36.652 "data_offset": 2048, 00:10:36.652 "data_size": 63488 00:10:36.652 }, 00:10:36.652 { 00:10:36.652 "name": "BaseBdev2", 00:10:36.652 
"uuid": "311afb59-4316-47eb-b3a8-02bc6420d11b", 00:10:36.652 "is_configured": true, 00:10:36.652 "data_offset": 2048, 00:10:36.652 "data_size": 63488 00:10:36.652 }, 00:10:36.652 { 00:10:36.652 "name": "BaseBdev3", 00:10:36.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.652 "is_configured": false, 00:10:36.652 "data_offset": 0, 00:10:36.652 "data_size": 0 00:10:36.652 }, 00:10:36.652 { 00:10:36.652 "name": "BaseBdev4", 00:10:36.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.652 "is_configured": false, 00:10:36.652 "data_offset": 0, 00:10:36.652 "data_size": 0 00:10:36.652 } 00:10:36.652 ] 00:10:36.652 }' 00:10:36.652 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.652 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.912 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.912 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.912 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.173 [2024-11-26 15:26:35.411798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.173 BaseBdev3 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.173 [ 00:10:37.173 { 00:10:37.173 "name": "BaseBdev3", 00:10:37.173 "aliases": [ 00:10:37.173 "a3c69f17-5225-4f75-86dd-49186195de3d" 00:10:37.173 ], 00:10:37.173 "product_name": "Malloc disk", 00:10:37.173 "block_size": 512, 00:10:37.173 "num_blocks": 65536, 00:10:37.173 "uuid": "a3c69f17-5225-4f75-86dd-49186195de3d", 00:10:37.173 "assigned_rate_limits": { 00:10:37.173 "rw_ios_per_sec": 0, 00:10:37.173 "rw_mbytes_per_sec": 0, 00:10:37.173 "r_mbytes_per_sec": 0, 00:10:37.173 "w_mbytes_per_sec": 0 00:10:37.173 }, 00:10:37.173 "claimed": true, 00:10:37.173 "claim_type": "exclusive_write", 00:10:37.173 "zoned": false, 00:10:37.173 "supported_io_types": { 00:10:37.173 "read": true, 00:10:37.173 "write": true, 00:10:37.173 "unmap": true, 00:10:37.173 "flush": true, 00:10:37.173 "reset": true, 00:10:37.173 "nvme_admin": false, 00:10:37.173 "nvme_io": false, 00:10:37.173 "nvme_io_md": false, 00:10:37.173 "write_zeroes": true, 00:10:37.173 "zcopy": true, 00:10:37.173 "get_zone_info": false, 00:10:37.173 
"zone_management": false, 00:10:37.173 "zone_append": false, 00:10:37.173 "compare": false, 00:10:37.173 "compare_and_write": false, 00:10:37.173 "abort": true, 00:10:37.173 "seek_hole": false, 00:10:37.173 "seek_data": false, 00:10:37.173 "copy": true, 00:10:37.173 "nvme_iov_md": false 00:10:37.173 }, 00:10:37.173 "memory_domains": [ 00:10:37.173 { 00:10:37.173 "dma_device_id": "system", 00:10:37.173 "dma_device_type": 1 00:10:37.173 }, 00:10:37.173 { 00:10:37.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.173 "dma_device_type": 2 00:10:37.173 } 00:10:37.173 ], 00:10:37.173 "driver_specific": {} 00:10:37.173 } 00:10:37.173 ] 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.173 "name": "Existed_Raid", 00:10:37.173 "uuid": "52824c7b-5956-4990-983e-fcff5926f691", 00:10:37.173 "strip_size_kb": 0, 00:10:37.173 "state": "configuring", 00:10:37.173 "raid_level": "raid1", 00:10:37.173 "superblock": true, 00:10:37.173 "num_base_bdevs": 4, 00:10:37.173 "num_base_bdevs_discovered": 3, 00:10:37.173 "num_base_bdevs_operational": 4, 00:10:37.173 "base_bdevs_list": [ 00:10:37.173 { 00:10:37.173 "name": "BaseBdev1", 00:10:37.173 "uuid": "557b8f46-6956-4bd6-8cd8-dac5e07adf59", 00:10:37.173 "is_configured": true, 00:10:37.173 "data_offset": 2048, 00:10:37.173 "data_size": 63488 00:10:37.173 }, 00:10:37.173 { 00:10:37.173 "name": "BaseBdev2", 00:10:37.173 "uuid": "311afb59-4316-47eb-b3a8-02bc6420d11b", 00:10:37.173 "is_configured": true, 00:10:37.173 "data_offset": 2048, 00:10:37.173 "data_size": 63488 00:10:37.173 }, 00:10:37.173 { 00:10:37.173 "name": "BaseBdev3", 00:10:37.173 "uuid": "a3c69f17-5225-4f75-86dd-49186195de3d", 00:10:37.173 "is_configured": true, 00:10:37.173 "data_offset": 2048, 
00:10:37.173 "data_size": 63488 00:10:37.173 }, 00:10:37.173 { 00:10:37.173 "name": "BaseBdev4", 00:10:37.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.173 "is_configured": false, 00:10:37.173 "data_offset": 0, 00:10:37.173 "data_size": 0 00:10:37.173 } 00:10:37.173 ] 00:10:37.173 }' 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.173 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 BaseBdev4 00:10:37.434 [2024-11-26 15:26:35.847248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:37.434 [2024-11-26 15:26:35.847474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:37.434 [2024-11-26 15:26:35.847496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:37.434 [2024-11-26 15:26:35.847805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:37.434 [2024-11-26 15:26:35.847992] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:37.434 [2024-11-26 15:26:35.848011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:37.434 [2024-11-26 15:26:35.848157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev4 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 [ 00:10:37.434 { 00:10:37.434 "name": "BaseBdev4", 00:10:37.434 "aliases": [ 00:10:37.434 "92b99fef-be1b-4af1-81a7-e8648ce81a6f" 00:10:37.434 ], 00:10:37.434 "product_name": "Malloc disk", 00:10:37.434 "block_size": 512, 00:10:37.434 "num_blocks": 65536, 00:10:37.434 "uuid": "92b99fef-be1b-4af1-81a7-e8648ce81a6f", 00:10:37.434 "assigned_rate_limits": { 00:10:37.434 "rw_ios_per_sec": 0, 00:10:37.434 "rw_mbytes_per_sec": 0, 00:10:37.434 "r_mbytes_per_sec": 0, 00:10:37.434 "w_mbytes_per_sec": 0 00:10:37.434 }, 00:10:37.434 "claimed": true, 00:10:37.434 "claim_type": 
"exclusive_write", 00:10:37.434 "zoned": false, 00:10:37.434 "supported_io_types": { 00:10:37.434 "read": true, 00:10:37.434 "write": true, 00:10:37.434 "unmap": true, 00:10:37.434 "flush": true, 00:10:37.434 "reset": true, 00:10:37.434 "nvme_admin": false, 00:10:37.434 "nvme_io": false, 00:10:37.434 "nvme_io_md": false, 00:10:37.434 "write_zeroes": true, 00:10:37.434 "zcopy": true, 00:10:37.434 "get_zone_info": false, 00:10:37.434 "zone_management": false, 00:10:37.434 "zone_append": false, 00:10:37.434 "compare": false, 00:10:37.434 "compare_and_write": false, 00:10:37.434 "abort": true, 00:10:37.434 "seek_hole": false, 00:10:37.434 "seek_data": false, 00:10:37.434 "copy": true, 00:10:37.434 "nvme_iov_md": false 00:10:37.434 }, 00:10:37.434 "memory_domains": [ 00:10:37.434 { 00:10:37.434 "dma_device_id": "system", 00:10:37.434 "dma_device_type": 1 00:10:37.434 }, 00:10:37.434 { 00:10:37.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.434 "dma_device_type": 2 00:10:37.434 } 00:10:37.434 ], 00:10:37.434 "driver_specific": {} 00:10:37.434 } 00:10:37.434 ] 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.434 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.694 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.694 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.694 "name": "Existed_Raid", 00:10:37.694 "uuid": "52824c7b-5956-4990-983e-fcff5926f691", 00:10:37.694 "strip_size_kb": 0, 00:10:37.694 "state": "online", 00:10:37.694 "raid_level": "raid1", 00:10:37.694 "superblock": true, 00:10:37.694 "num_base_bdevs": 4, 00:10:37.694 "num_base_bdevs_discovered": 4, 00:10:37.694 "num_base_bdevs_operational": 4, 00:10:37.694 "base_bdevs_list": [ 00:10:37.694 { 00:10:37.694 "name": "BaseBdev1", 00:10:37.694 "uuid": "557b8f46-6956-4bd6-8cd8-dac5e07adf59", 00:10:37.694 "is_configured": true, 00:10:37.694 "data_offset": 2048, 00:10:37.694 "data_size": 63488 
00:10:37.694 }, 00:10:37.694 { 00:10:37.694 "name": "BaseBdev2", 00:10:37.694 "uuid": "311afb59-4316-47eb-b3a8-02bc6420d11b", 00:10:37.694 "is_configured": true, 00:10:37.694 "data_offset": 2048, 00:10:37.694 "data_size": 63488 00:10:37.694 }, 00:10:37.694 { 00:10:37.694 "name": "BaseBdev3", 00:10:37.694 "uuid": "a3c69f17-5225-4f75-86dd-49186195de3d", 00:10:37.694 "is_configured": true, 00:10:37.694 "data_offset": 2048, 00:10:37.694 "data_size": 63488 00:10:37.694 }, 00:10:37.694 { 00:10:37.694 "name": "BaseBdev4", 00:10:37.694 "uuid": "92b99fef-be1b-4af1-81a7-e8648ce81a6f", 00:10:37.694 "is_configured": true, 00:10:37.694 "data_offset": 2048, 00:10:37.694 "data_size": 63488 00:10:37.694 } 00:10:37.694 ] 00:10:37.694 }' 00:10:37.694 15:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.694 15:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.954 
15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.954 [2024-11-26 15:26:36.355748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.954 "name": "Existed_Raid", 00:10:37.954 "aliases": [ 00:10:37.954 "52824c7b-5956-4990-983e-fcff5926f691" 00:10:37.954 ], 00:10:37.954 "product_name": "Raid Volume", 00:10:37.954 "block_size": 512, 00:10:37.954 "num_blocks": 63488, 00:10:37.954 "uuid": "52824c7b-5956-4990-983e-fcff5926f691", 00:10:37.954 "assigned_rate_limits": { 00:10:37.954 "rw_ios_per_sec": 0, 00:10:37.954 "rw_mbytes_per_sec": 0, 00:10:37.954 "r_mbytes_per_sec": 0, 00:10:37.954 "w_mbytes_per_sec": 0 00:10:37.954 }, 00:10:37.954 "claimed": false, 00:10:37.954 "zoned": false, 00:10:37.954 "supported_io_types": { 00:10:37.954 "read": true, 00:10:37.954 "write": true, 00:10:37.954 "unmap": false, 00:10:37.954 "flush": false, 00:10:37.954 "reset": true, 00:10:37.954 "nvme_admin": false, 00:10:37.954 "nvme_io": false, 00:10:37.954 "nvme_io_md": false, 00:10:37.954 "write_zeroes": true, 00:10:37.954 "zcopy": false, 00:10:37.954 "get_zone_info": false, 00:10:37.954 "zone_management": false, 00:10:37.954 "zone_append": false, 00:10:37.954 "compare": false, 00:10:37.954 "compare_and_write": false, 00:10:37.954 "abort": false, 00:10:37.954 "seek_hole": false, 00:10:37.954 "seek_data": false, 00:10:37.954 "copy": false, 00:10:37.954 "nvme_iov_md": false 00:10:37.954 }, 00:10:37.954 "memory_domains": [ 00:10:37.954 { 00:10:37.954 "dma_device_id": "system", 00:10:37.954 "dma_device_type": 1 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.954 "dma_device_type": 2 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "dma_device_id": "system", 
00:10:37.954 "dma_device_type": 1 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.954 "dma_device_type": 2 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "dma_device_id": "system", 00:10:37.954 "dma_device_type": 1 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.954 "dma_device_type": 2 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "dma_device_id": "system", 00:10:37.954 "dma_device_type": 1 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.954 "dma_device_type": 2 00:10:37.954 } 00:10:37.954 ], 00:10:37.954 "driver_specific": { 00:10:37.954 "raid": { 00:10:37.954 "uuid": "52824c7b-5956-4990-983e-fcff5926f691", 00:10:37.954 "strip_size_kb": 0, 00:10:37.954 "state": "online", 00:10:37.954 "raid_level": "raid1", 00:10:37.954 "superblock": true, 00:10:37.954 "num_base_bdevs": 4, 00:10:37.954 "num_base_bdevs_discovered": 4, 00:10:37.954 "num_base_bdevs_operational": 4, 00:10:37.954 "base_bdevs_list": [ 00:10:37.954 { 00:10:37.954 "name": "BaseBdev1", 00:10:37.954 "uuid": "557b8f46-6956-4bd6-8cd8-dac5e07adf59", 00:10:37.954 "is_configured": true, 00:10:37.954 "data_offset": 2048, 00:10:37.954 "data_size": 63488 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "name": "BaseBdev2", 00:10:37.954 "uuid": "311afb59-4316-47eb-b3a8-02bc6420d11b", 00:10:37.954 "is_configured": true, 00:10:37.954 "data_offset": 2048, 00:10:37.954 "data_size": 63488 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "name": "BaseBdev3", 00:10:37.954 "uuid": "a3c69f17-5225-4f75-86dd-49186195de3d", 00:10:37.954 "is_configured": true, 00:10:37.954 "data_offset": 2048, 00:10:37.954 "data_size": 63488 00:10:37.954 }, 00:10:37.954 { 00:10:37.954 "name": "BaseBdev4", 00:10:37.954 "uuid": "92b99fef-be1b-4af1-81a7-e8648ce81a6f", 00:10:37.954 "is_configured": true, 00:10:37.954 "data_offset": 2048, 00:10:37.954 "data_size": 63488 00:10:37.954 } 00:10:37.954 ] 00:10:37.954 } 00:10:37.954 
} 00:10:37.954 }' 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.954 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:37.954 BaseBdev2 00:10:37.954 BaseBdev3 00:10:37.954 BaseBdev4' 00:10:38.213 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.214 15:26:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.214 15:26:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.214 [2024-11-26 15:26:36.647565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.214 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.473 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.473 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.473 "name": "Existed_Raid", 00:10:38.473 "uuid": "52824c7b-5956-4990-983e-fcff5926f691", 00:10:38.473 "strip_size_kb": 0, 00:10:38.473 "state": "online", 00:10:38.473 "raid_level": "raid1", 00:10:38.473 "superblock": true, 00:10:38.473 "num_base_bdevs": 4, 00:10:38.473 "num_base_bdevs_discovered": 3, 00:10:38.473 "num_base_bdevs_operational": 3, 00:10:38.473 "base_bdevs_list": [ 00:10:38.473 { 00:10:38.473 "name": null, 00:10:38.473 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:38.473 "is_configured": false, 00:10:38.473 "data_offset": 0, 00:10:38.473 "data_size": 63488 00:10:38.473 }, 00:10:38.473 { 00:10:38.473 "name": "BaseBdev2", 00:10:38.473 "uuid": "311afb59-4316-47eb-b3a8-02bc6420d11b", 00:10:38.473 "is_configured": true, 00:10:38.473 "data_offset": 2048, 00:10:38.473 "data_size": 63488 00:10:38.473 }, 00:10:38.473 { 00:10:38.473 "name": "BaseBdev3", 00:10:38.473 "uuid": "a3c69f17-5225-4f75-86dd-49186195de3d", 00:10:38.473 "is_configured": true, 00:10:38.473 "data_offset": 2048, 00:10:38.473 "data_size": 63488 00:10:38.473 }, 00:10:38.473 { 00:10:38.473 "name": "BaseBdev4", 00:10:38.473 "uuid": "92b99fef-be1b-4af1-81a7-e8648ce81a6f", 00:10:38.473 "is_configured": true, 00:10:38.473 "data_offset": 2048, 00:10:38.473 "data_size": 63488 00:10:38.473 } 00:10:38.473 ] 00:10:38.473 }' 00:10:38.473 15:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.473 15:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.733 15:26:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.733 [2024-11-26 15:26:37.155099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.733 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.993 [2024-11-26 15:26:37.226534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.993 [2024-11-26 15:26:37.293924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:38.993 [2024-11-26 15:26:37.294039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.993 [2024-11-26 15:26:37.305618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.993 [2024-11-26 
15:26:37.305676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.993 [2024-11-26 15:26:37.305687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.993 15:26:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.993 BaseBdev2 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.993 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.993 [ 00:10:38.993 { 00:10:38.993 "name": "BaseBdev2", 00:10:38.993 "aliases": [ 00:10:38.993 "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a" 00:10:38.993 ], 00:10:38.993 "product_name": "Malloc disk", 00:10:38.993 "block_size": 512, 00:10:38.993 "num_blocks": 65536, 00:10:38.993 
"uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:38.993 "assigned_rate_limits": { 00:10:38.993 "rw_ios_per_sec": 0, 00:10:38.993 "rw_mbytes_per_sec": 0, 00:10:38.993 "r_mbytes_per_sec": 0, 00:10:38.993 "w_mbytes_per_sec": 0 00:10:38.993 }, 00:10:38.993 "claimed": false, 00:10:38.993 "zoned": false, 00:10:38.993 "supported_io_types": { 00:10:38.993 "read": true, 00:10:38.993 "write": true, 00:10:38.993 "unmap": true, 00:10:38.993 "flush": true, 00:10:38.993 "reset": true, 00:10:38.993 "nvme_admin": false, 00:10:38.993 "nvme_io": false, 00:10:38.993 "nvme_io_md": false, 00:10:38.993 "write_zeroes": true, 00:10:38.993 "zcopy": true, 00:10:38.993 "get_zone_info": false, 00:10:38.993 "zone_management": false, 00:10:38.993 "zone_append": false, 00:10:38.993 "compare": false, 00:10:38.993 "compare_and_write": false, 00:10:38.994 "abort": true, 00:10:38.994 "seek_hole": false, 00:10:38.994 "seek_data": false, 00:10:38.994 "copy": true, 00:10:38.994 "nvme_iov_md": false 00:10:38.994 }, 00:10:38.994 "memory_domains": [ 00:10:38.994 { 00:10:38.994 "dma_device_id": "system", 00:10:38.994 "dma_device_type": 1 00:10:38.994 }, 00:10:38.994 { 00:10:38.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.994 "dma_device_type": 2 00:10:38.994 } 00:10:38.994 ], 00:10:38.994 "driver_specific": {} 00:10:38.994 } 00:10:38.994 ] 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.994 BaseBdev3 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.994 [ 00:10:38.994 { 00:10:38.994 "name": "BaseBdev3", 00:10:38.994 "aliases": [ 00:10:38.994 "14c7ae52-9e36-4377-ba22-ea3f4ca67394" 00:10:38.994 ], 00:10:38.994 "product_name": "Malloc disk", 00:10:38.994 "block_size": 512, 
00:10:38.994 "num_blocks": 65536, 00:10:38.994 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:38.994 "assigned_rate_limits": { 00:10:38.994 "rw_ios_per_sec": 0, 00:10:38.994 "rw_mbytes_per_sec": 0, 00:10:38.994 "r_mbytes_per_sec": 0, 00:10:38.994 "w_mbytes_per_sec": 0 00:10:38.994 }, 00:10:38.994 "claimed": false, 00:10:38.994 "zoned": false, 00:10:38.994 "supported_io_types": { 00:10:38.994 "read": true, 00:10:38.994 "write": true, 00:10:38.994 "unmap": true, 00:10:38.994 "flush": true, 00:10:38.994 "reset": true, 00:10:38.994 "nvme_admin": false, 00:10:38.994 "nvme_io": false, 00:10:38.994 "nvme_io_md": false, 00:10:38.994 "write_zeroes": true, 00:10:38.994 "zcopy": true, 00:10:38.994 "get_zone_info": false, 00:10:38.994 "zone_management": false, 00:10:38.994 "zone_append": false, 00:10:38.994 "compare": false, 00:10:38.994 "compare_and_write": false, 00:10:38.994 "abort": true, 00:10:38.994 "seek_hole": false, 00:10:38.994 "seek_data": false, 00:10:38.994 "copy": true, 00:10:38.994 "nvme_iov_md": false 00:10:38.994 }, 00:10:38.994 "memory_domains": [ 00:10:38.994 { 00:10:38.994 "dma_device_id": "system", 00:10:38.994 "dma_device_type": 1 00:10:38.994 }, 00:10:38.994 { 00:10:38.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.994 "dma_device_type": 2 00:10:38.994 } 00:10:38.994 ], 00:10:38.994 "driver_specific": {} 00:10:38.994 } 00:10:38.994 ] 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.994 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:39.253 15:26:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.253 BaseBdev4 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.253 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.253 [ 00:10:39.253 { 00:10:39.253 "name": "BaseBdev4", 00:10:39.253 "aliases": [ 00:10:39.253 "036b65f8-822c-4e04-9c71-054b96961ede" 00:10:39.253 ], 
00:10:39.253 "product_name": "Malloc disk", 00:10:39.253 "block_size": 512, 00:10:39.253 "num_blocks": 65536, 00:10:39.253 "uuid": "036b65f8-822c-4e04-9c71-054b96961ede", 00:10:39.253 "assigned_rate_limits": { 00:10:39.253 "rw_ios_per_sec": 0, 00:10:39.253 "rw_mbytes_per_sec": 0, 00:10:39.253 "r_mbytes_per_sec": 0, 00:10:39.253 "w_mbytes_per_sec": 0 00:10:39.253 }, 00:10:39.253 "claimed": false, 00:10:39.253 "zoned": false, 00:10:39.253 "supported_io_types": { 00:10:39.254 "read": true, 00:10:39.254 "write": true, 00:10:39.254 "unmap": true, 00:10:39.254 "flush": true, 00:10:39.254 "reset": true, 00:10:39.254 "nvme_admin": false, 00:10:39.254 "nvme_io": false, 00:10:39.254 "nvme_io_md": false, 00:10:39.254 "write_zeroes": true, 00:10:39.254 "zcopy": true, 00:10:39.254 "get_zone_info": false, 00:10:39.254 "zone_management": false, 00:10:39.254 "zone_append": false, 00:10:39.254 "compare": false, 00:10:39.254 "compare_and_write": false, 00:10:39.254 "abort": true, 00:10:39.254 "seek_hole": false, 00:10:39.254 "seek_data": false, 00:10:39.254 "copy": true, 00:10:39.254 "nvme_iov_md": false 00:10:39.254 }, 00:10:39.254 "memory_domains": [ 00:10:39.254 { 00:10:39.254 "dma_device_id": "system", 00:10:39.254 "dma_device_type": 1 00:10:39.254 }, 00:10:39.254 { 00:10:39.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.254 "dma_device_type": 2 00:10:39.254 } 00:10:39.254 ], 00:10:39.254 "driver_specific": {} 00:10:39.254 } 00:10:39.254 ] 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.254 [2024-11-26 15:26:37.524047] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.254 [2024-11-26 15:26:37.524147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.254 [2024-11-26 15:26:37.524201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.254 [2024-11-26 15:26:37.526078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.254 [2024-11-26 15:26:37.526168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.254 "name": "Existed_Raid", 00:10:39.254 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:39.254 "strip_size_kb": 0, 00:10:39.254 "state": "configuring", 00:10:39.254 "raid_level": "raid1", 00:10:39.254 "superblock": true, 00:10:39.254 "num_base_bdevs": 4, 00:10:39.254 "num_base_bdevs_discovered": 3, 00:10:39.254 "num_base_bdevs_operational": 4, 00:10:39.254 "base_bdevs_list": [ 00:10:39.254 { 00:10:39.254 "name": "BaseBdev1", 00:10:39.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.254 "is_configured": false, 00:10:39.254 "data_offset": 0, 00:10:39.254 "data_size": 0 00:10:39.254 }, 00:10:39.254 { 00:10:39.254 "name": "BaseBdev2", 00:10:39.254 "uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:39.254 "is_configured": true, 00:10:39.254 "data_offset": 2048, 00:10:39.254 "data_size": 63488 00:10:39.254 }, 00:10:39.254 { 00:10:39.254 "name": "BaseBdev3", 00:10:39.254 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:39.254 "is_configured": true, 00:10:39.254 "data_offset": 2048, 
00:10:39.254 "data_size": 63488 00:10:39.254 }, 00:10:39.254 { 00:10:39.254 "name": "BaseBdev4", 00:10:39.254 "uuid": "036b65f8-822c-4e04-9c71-054b96961ede", 00:10:39.254 "is_configured": true, 00:10:39.254 "data_offset": 2048, 00:10:39.254 "data_size": 63488 00:10:39.254 } 00:10:39.254 ] 00:10:39.254 }' 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.254 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.512 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:39.512 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.512 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.512 [2024-11-26 15:26:37.980172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.512 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.771 15:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.771 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.771 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.771 "name": "Existed_Raid", 00:10:39.771 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:39.771 "strip_size_kb": 0, 00:10:39.771 "state": "configuring", 00:10:39.771 "raid_level": "raid1", 00:10:39.771 "superblock": true, 00:10:39.771 "num_base_bdevs": 4, 00:10:39.771 "num_base_bdevs_discovered": 2, 00:10:39.771 "num_base_bdevs_operational": 4, 00:10:39.771 "base_bdevs_list": [ 00:10:39.771 { 00:10:39.771 "name": "BaseBdev1", 00:10:39.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.771 "is_configured": false, 00:10:39.771 "data_offset": 0, 00:10:39.771 "data_size": 0 00:10:39.771 }, 00:10:39.771 { 00:10:39.771 "name": null, 00:10:39.771 "uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:39.771 "is_configured": false, 00:10:39.771 "data_offset": 0, 00:10:39.771 "data_size": 63488 00:10:39.771 }, 00:10:39.771 { 00:10:39.771 "name": "BaseBdev3", 00:10:39.771 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:39.771 "is_configured": true, 00:10:39.771 "data_offset": 2048, 00:10:39.771 
"data_size": 63488 00:10:39.771 }, 00:10:39.771 { 00:10:39.771 "name": "BaseBdev4", 00:10:39.771 "uuid": "036b65f8-822c-4e04-9c71-054b96961ede", 00:10:39.771 "is_configured": true, 00:10:39.771 "data_offset": 2048, 00:10:39.771 "data_size": 63488 00:10:39.771 } 00:10:39.771 ] 00:10:39.771 }' 00:10:39.771 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.771 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.031 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.031 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.031 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.031 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.031 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.290 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.291 [2024-11-26 15:26:38.523654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.291 BaseBdev1 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.291 [ 00:10:40.291 { 00:10:40.291 "name": "BaseBdev1", 00:10:40.291 "aliases": [ 00:10:40.291 "6c8c86b3-f06a-4528-8b30-c9caed31664e" 00:10:40.291 ], 00:10:40.291 "product_name": "Malloc disk", 00:10:40.291 "block_size": 512, 00:10:40.291 "num_blocks": 65536, 00:10:40.291 "uuid": "6c8c86b3-f06a-4528-8b30-c9caed31664e", 00:10:40.291 "assigned_rate_limits": { 00:10:40.291 "rw_ios_per_sec": 0, 00:10:40.291 "rw_mbytes_per_sec": 0, 00:10:40.291 "r_mbytes_per_sec": 0, 00:10:40.291 "w_mbytes_per_sec": 0 00:10:40.291 }, 00:10:40.291 "claimed": true, 00:10:40.291 "claim_type": "exclusive_write", 00:10:40.291 "zoned": false, 00:10:40.291 "supported_io_types": { 
00:10:40.291 "read": true, 00:10:40.291 "write": true, 00:10:40.291 "unmap": true, 00:10:40.291 "flush": true, 00:10:40.291 "reset": true, 00:10:40.291 "nvme_admin": false, 00:10:40.291 "nvme_io": false, 00:10:40.291 "nvme_io_md": false, 00:10:40.291 "write_zeroes": true, 00:10:40.291 "zcopy": true, 00:10:40.291 "get_zone_info": false, 00:10:40.291 "zone_management": false, 00:10:40.291 "zone_append": false, 00:10:40.291 "compare": false, 00:10:40.291 "compare_and_write": false, 00:10:40.291 "abort": true, 00:10:40.291 "seek_hole": false, 00:10:40.291 "seek_data": false, 00:10:40.291 "copy": true, 00:10:40.291 "nvme_iov_md": false 00:10:40.291 }, 00:10:40.291 "memory_domains": [ 00:10:40.291 { 00:10:40.291 "dma_device_id": "system", 00:10:40.291 "dma_device_type": 1 00:10:40.291 }, 00:10:40.291 { 00:10:40.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.291 "dma_device_type": 2 00:10:40.291 } 00:10:40.291 ], 00:10:40.291 "driver_specific": {} 00:10:40.291 } 00:10:40.291 ] 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.291 15:26:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.291 "name": "Existed_Raid", 00:10:40.291 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:40.291 "strip_size_kb": 0, 00:10:40.291 "state": "configuring", 00:10:40.291 "raid_level": "raid1", 00:10:40.291 "superblock": true, 00:10:40.291 "num_base_bdevs": 4, 00:10:40.291 "num_base_bdevs_discovered": 3, 00:10:40.291 "num_base_bdevs_operational": 4, 00:10:40.291 "base_bdevs_list": [ 00:10:40.291 { 00:10:40.291 "name": "BaseBdev1", 00:10:40.291 "uuid": "6c8c86b3-f06a-4528-8b30-c9caed31664e", 00:10:40.291 "is_configured": true, 00:10:40.291 "data_offset": 2048, 00:10:40.291 "data_size": 63488 00:10:40.291 }, 00:10:40.291 { 00:10:40.291 "name": null, 00:10:40.291 "uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:40.291 "is_configured": false, 00:10:40.291 "data_offset": 0, 00:10:40.291 "data_size": 63488 00:10:40.291 }, 00:10:40.291 { 00:10:40.291 "name": 
"BaseBdev3", 00:10:40.291 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:40.291 "is_configured": true, 00:10:40.291 "data_offset": 2048, 00:10:40.291 "data_size": 63488 00:10:40.291 }, 00:10:40.291 { 00:10:40.291 "name": "BaseBdev4", 00:10:40.291 "uuid": "036b65f8-822c-4e04-9c71-054b96961ede", 00:10:40.291 "is_configured": true, 00:10:40.291 "data_offset": 2048, 00:10:40.291 "data_size": 63488 00:10:40.291 } 00:10:40.291 ] 00:10:40.291 }' 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.291 15:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.550 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.550 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.550 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.550 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:40.550 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.809 [2024-11-26 15:26:39.055867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.809 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.809 "name": "Existed_Raid", 00:10:40.809 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:40.809 "strip_size_kb": 0, 00:10:40.809 "state": "configuring", 00:10:40.809 
"raid_level": "raid1", 00:10:40.809 "superblock": true, 00:10:40.809 "num_base_bdevs": 4, 00:10:40.809 "num_base_bdevs_discovered": 2, 00:10:40.809 "num_base_bdevs_operational": 4, 00:10:40.809 "base_bdevs_list": [ 00:10:40.809 { 00:10:40.809 "name": "BaseBdev1", 00:10:40.809 "uuid": "6c8c86b3-f06a-4528-8b30-c9caed31664e", 00:10:40.809 "is_configured": true, 00:10:40.809 "data_offset": 2048, 00:10:40.809 "data_size": 63488 00:10:40.809 }, 00:10:40.809 { 00:10:40.809 "name": null, 00:10:40.809 "uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:40.809 "is_configured": false, 00:10:40.809 "data_offset": 0, 00:10:40.809 "data_size": 63488 00:10:40.810 }, 00:10:40.810 { 00:10:40.810 "name": null, 00:10:40.810 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:40.810 "is_configured": false, 00:10:40.810 "data_offset": 0, 00:10:40.810 "data_size": 63488 00:10:40.810 }, 00:10:40.810 { 00:10:40.810 "name": "BaseBdev4", 00:10:40.810 "uuid": "036b65f8-822c-4e04-9c71-054b96961ede", 00:10:40.810 "is_configured": true, 00:10:40.810 "data_offset": 2048, 00:10:40.810 "data_size": 63488 00:10:40.810 } 00:10:40.810 ] 00:10:40.810 }' 00:10:40.810 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.810 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.068 [2024-11-26 15:26:39.520041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.068 15:26:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.068 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.327 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.327 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.327 "name": "Existed_Raid", 00:10:41.327 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:41.327 "strip_size_kb": 0, 00:10:41.327 "state": "configuring", 00:10:41.327 "raid_level": "raid1", 00:10:41.327 "superblock": true, 00:10:41.327 "num_base_bdevs": 4, 00:10:41.327 "num_base_bdevs_discovered": 3, 00:10:41.327 "num_base_bdevs_operational": 4, 00:10:41.327 "base_bdevs_list": [ 00:10:41.327 { 00:10:41.327 "name": "BaseBdev1", 00:10:41.327 "uuid": "6c8c86b3-f06a-4528-8b30-c9caed31664e", 00:10:41.327 "is_configured": true, 00:10:41.327 "data_offset": 2048, 00:10:41.327 "data_size": 63488 00:10:41.327 }, 00:10:41.327 { 00:10:41.327 "name": null, 00:10:41.327 "uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:41.327 "is_configured": false, 00:10:41.327 "data_offset": 0, 00:10:41.327 "data_size": 63488 00:10:41.327 }, 00:10:41.327 { 00:10:41.327 "name": "BaseBdev3", 00:10:41.327 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:41.327 "is_configured": true, 00:10:41.327 "data_offset": 2048, 00:10:41.327 "data_size": 63488 00:10:41.327 }, 00:10:41.327 { 00:10:41.327 "name": "BaseBdev4", 00:10:41.327 "uuid": "036b65f8-822c-4e04-9c71-054b96961ede", 00:10:41.327 "is_configured": true, 00:10:41.327 "data_offset": 2048, 00:10:41.327 "data_size": 63488 00:10:41.327 } 00:10:41.327 ] 00:10:41.327 }' 00:10:41.327 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.327 
15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.586 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.586 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.586 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.586 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.586 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.586 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:41.586 15:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.586 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.586 15:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.586 [2024-11-26 15:26:39.996188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.586 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.586 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:41.586 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.586 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.586 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.586 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.586 15:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.586 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.586 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.586 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.587 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.587 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.587 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.587 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.587 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.587 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.587 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.587 "name": "Existed_Raid", 00:10:41.587 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:41.587 "strip_size_kb": 0, 00:10:41.587 "state": "configuring", 00:10:41.587 "raid_level": "raid1", 00:10:41.587 "superblock": true, 00:10:41.587 "num_base_bdevs": 4, 00:10:41.587 "num_base_bdevs_discovered": 2, 00:10:41.587 "num_base_bdevs_operational": 4, 00:10:41.587 "base_bdevs_list": [ 00:10:41.587 { 00:10:41.587 "name": null, 00:10:41.587 "uuid": "6c8c86b3-f06a-4528-8b30-c9caed31664e", 00:10:41.587 "is_configured": false, 00:10:41.587 "data_offset": 0, 00:10:41.587 "data_size": 63488 00:10:41.587 }, 00:10:41.587 { 00:10:41.587 "name": null, 00:10:41.587 "uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:41.587 "is_configured": false, 
00:10:41.587 "data_offset": 0, 00:10:41.587 "data_size": 63488 00:10:41.587 }, 00:10:41.587 { 00:10:41.587 "name": "BaseBdev3", 00:10:41.587 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:41.587 "is_configured": true, 00:10:41.587 "data_offset": 2048, 00:10:41.587 "data_size": 63488 00:10:41.587 }, 00:10:41.587 { 00:10:41.587 "name": "BaseBdev4", 00:10:41.587 "uuid": "036b65f8-822c-4e04-9c71-054b96961ede", 00:10:41.587 "is_configured": true, 00:10:41.587 "data_offset": 2048, 00:10:41.587 "data_size": 63488 00:10:41.587 } 00:10:41.587 ] 00:10:41.587 }' 00:10:41.587 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.587 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.168 [2024-11-26 15:26:40.538822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.168 15:26:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.168 "name": 
"Existed_Raid", 00:10:42.168 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:42.168 "strip_size_kb": 0, 00:10:42.168 "state": "configuring", 00:10:42.168 "raid_level": "raid1", 00:10:42.168 "superblock": true, 00:10:42.168 "num_base_bdevs": 4, 00:10:42.168 "num_base_bdevs_discovered": 3, 00:10:42.168 "num_base_bdevs_operational": 4, 00:10:42.168 "base_bdevs_list": [ 00:10:42.168 { 00:10:42.168 "name": null, 00:10:42.168 "uuid": "6c8c86b3-f06a-4528-8b30-c9caed31664e", 00:10:42.168 "is_configured": false, 00:10:42.168 "data_offset": 0, 00:10:42.168 "data_size": 63488 00:10:42.168 }, 00:10:42.168 { 00:10:42.168 "name": "BaseBdev2", 00:10:42.168 "uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:42.168 "is_configured": true, 00:10:42.168 "data_offset": 2048, 00:10:42.168 "data_size": 63488 00:10:42.168 }, 00:10:42.168 { 00:10:42.168 "name": "BaseBdev3", 00:10:42.168 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:42.168 "is_configured": true, 00:10:42.168 "data_offset": 2048, 00:10:42.168 "data_size": 63488 00:10:42.168 }, 00:10:42.168 { 00:10:42.168 "name": "BaseBdev4", 00:10:42.168 "uuid": "036b65f8-822c-4e04-9c71-054b96961ede", 00:10:42.168 "is_configured": true, 00:10:42.168 "data_offset": 2048, 00:10:42.168 "data_size": 63488 00:10:42.168 } 00:10:42.168 ] 00:10:42.168 }' 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.168 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.783 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.783 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.783 15:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.783 15:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6c8c86b3-f06a-4528-8b30-c9caed31664e 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.783 [2024-11-26 15:26:41.106122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:42.783 [2024-11-26 15:26:41.106315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:42.783 [2024-11-26 15:26:41.106329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:42.783 [2024-11-26 15:26:41.106608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:42.783 NewBaseBdev 00:10:42.783 [2024-11-26 15:26:41.106734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:42.783 [2024-11-26 15:26:41.106752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:42.783 [2024-11-26 15:26:41.106853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.783 [ 00:10:42.783 { 00:10:42.783 "name": "NewBaseBdev", 00:10:42.783 "aliases": [ 00:10:42.783 "6c8c86b3-f06a-4528-8b30-c9caed31664e" 00:10:42.783 ], 00:10:42.783 "product_name": "Malloc disk", 00:10:42.783 "block_size": 512, 
00:10:42.783 "num_blocks": 65536, 00:10:42.783 "uuid": "6c8c86b3-f06a-4528-8b30-c9caed31664e", 00:10:42.783 "assigned_rate_limits": { 00:10:42.783 "rw_ios_per_sec": 0, 00:10:42.783 "rw_mbytes_per_sec": 0, 00:10:42.783 "r_mbytes_per_sec": 0, 00:10:42.783 "w_mbytes_per_sec": 0 00:10:42.783 }, 00:10:42.783 "claimed": true, 00:10:42.783 "claim_type": "exclusive_write", 00:10:42.783 "zoned": false, 00:10:42.783 "supported_io_types": { 00:10:42.783 "read": true, 00:10:42.783 "write": true, 00:10:42.783 "unmap": true, 00:10:42.783 "flush": true, 00:10:42.783 "reset": true, 00:10:42.783 "nvme_admin": false, 00:10:42.783 "nvme_io": false, 00:10:42.783 "nvme_io_md": false, 00:10:42.783 "write_zeroes": true, 00:10:42.783 "zcopy": true, 00:10:42.783 "get_zone_info": false, 00:10:42.783 "zone_management": false, 00:10:42.783 "zone_append": false, 00:10:42.783 "compare": false, 00:10:42.783 "compare_and_write": false, 00:10:42.783 "abort": true, 00:10:42.783 "seek_hole": false, 00:10:42.783 "seek_data": false, 00:10:42.783 "copy": true, 00:10:42.783 "nvme_iov_md": false 00:10:42.783 }, 00:10:42.783 "memory_domains": [ 00:10:42.783 { 00:10:42.783 "dma_device_id": "system", 00:10:42.783 "dma_device_type": 1 00:10:42.783 }, 00:10:42.783 { 00:10:42.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.783 "dma_device_type": 2 00:10:42.783 } 00:10:42.783 ], 00:10:42.783 "driver_specific": {} 00:10:42.783 } 00:10:42.783 ] 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.783 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.784 "name": "Existed_Raid", 00:10:42.784 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:42.784 "strip_size_kb": 0, 00:10:42.784 "state": "online", 00:10:42.784 "raid_level": "raid1", 00:10:42.784 "superblock": true, 00:10:42.784 "num_base_bdevs": 4, 00:10:42.784 "num_base_bdevs_discovered": 4, 00:10:42.784 "num_base_bdevs_operational": 4, 00:10:42.784 "base_bdevs_list": [ 00:10:42.784 { 00:10:42.784 "name": "NewBaseBdev", 00:10:42.784 "uuid": 
"6c8c86b3-f06a-4528-8b30-c9caed31664e", 00:10:42.784 "is_configured": true, 00:10:42.784 "data_offset": 2048, 00:10:42.784 "data_size": 63488 00:10:42.784 }, 00:10:42.784 { 00:10:42.784 "name": "BaseBdev2", 00:10:42.784 "uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:42.784 "is_configured": true, 00:10:42.784 "data_offset": 2048, 00:10:42.784 "data_size": 63488 00:10:42.784 }, 00:10:42.784 { 00:10:42.784 "name": "BaseBdev3", 00:10:42.784 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:42.784 "is_configured": true, 00:10:42.784 "data_offset": 2048, 00:10:42.784 "data_size": 63488 00:10:42.784 }, 00:10:42.784 { 00:10:42.784 "name": "BaseBdev4", 00:10:42.784 "uuid": "036b65f8-822c-4e04-9c71-054b96961ede", 00:10:42.784 "is_configured": true, 00:10:42.784 "data_offset": 2048, 00:10:42.784 "data_size": 63488 00:10:42.784 } 00:10:42.784 ] 00:10:42.784 }' 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.784 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.370 [2024-11-26 15:26:41.558654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.370 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.370 "name": "Existed_Raid", 00:10:43.370 "aliases": [ 00:10:43.370 "b3f200f3-a8d4-404b-b134-6773a7161e18" 00:10:43.370 ], 00:10:43.370 "product_name": "Raid Volume", 00:10:43.370 "block_size": 512, 00:10:43.370 "num_blocks": 63488, 00:10:43.370 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:43.370 "assigned_rate_limits": { 00:10:43.370 "rw_ios_per_sec": 0, 00:10:43.370 "rw_mbytes_per_sec": 0, 00:10:43.370 "r_mbytes_per_sec": 0, 00:10:43.370 "w_mbytes_per_sec": 0 00:10:43.370 }, 00:10:43.370 "claimed": false, 00:10:43.370 "zoned": false, 00:10:43.370 "supported_io_types": { 00:10:43.370 "read": true, 00:10:43.370 "write": true, 00:10:43.370 "unmap": false, 00:10:43.370 "flush": false, 00:10:43.370 "reset": true, 00:10:43.370 "nvme_admin": false, 00:10:43.370 "nvme_io": false, 00:10:43.370 "nvme_io_md": false, 00:10:43.370 "write_zeroes": true, 00:10:43.370 "zcopy": false, 00:10:43.370 "get_zone_info": false, 00:10:43.370 "zone_management": false, 00:10:43.370 "zone_append": false, 00:10:43.370 "compare": false, 00:10:43.370 "compare_and_write": false, 00:10:43.370 "abort": false, 00:10:43.370 "seek_hole": false, 00:10:43.370 "seek_data": false, 00:10:43.370 "copy": false, 00:10:43.370 "nvme_iov_md": false 00:10:43.370 }, 00:10:43.370 "memory_domains": [ 00:10:43.370 { 00:10:43.370 "dma_device_id": "system", 00:10:43.370 "dma_device_type": 1 00:10:43.370 }, 00:10:43.370 
{ 00:10:43.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.370 "dma_device_type": 2 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "dma_device_id": "system", 00:10:43.370 "dma_device_type": 1 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.370 "dma_device_type": 2 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "dma_device_id": "system", 00:10:43.370 "dma_device_type": 1 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.370 "dma_device_type": 2 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "dma_device_id": "system", 00:10:43.370 "dma_device_type": 1 00:10:43.370 }, 00:10:43.370 { 00:10:43.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.370 "dma_device_type": 2 00:10:43.370 } 00:10:43.370 ], 00:10:43.370 "driver_specific": { 00:10:43.370 "raid": { 00:10:43.370 "uuid": "b3f200f3-a8d4-404b-b134-6773a7161e18", 00:10:43.370 "strip_size_kb": 0, 00:10:43.370 "state": "online", 00:10:43.370 "raid_level": "raid1", 00:10:43.371 "superblock": true, 00:10:43.371 "num_base_bdevs": 4, 00:10:43.371 "num_base_bdevs_discovered": 4, 00:10:43.371 "num_base_bdevs_operational": 4, 00:10:43.371 "base_bdevs_list": [ 00:10:43.371 { 00:10:43.371 "name": "NewBaseBdev", 00:10:43.371 "uuid": "6c8c86b3-f06a-4528-8b30-c9caed31664e", 00:10:43.371 "is_configured": true, 00:10:43.371 "data_offset": 2048, 00:10:43.371 "data_size": 63488 00:10:43.371 }, 00:10:43.371 { 00:10:43.371 "name": "BaseBdev2", 00:10:43.371 "uuid": "2fbbf03c-6e00-4ce9-a12a-d1b413a1db9a", 00:10:43.371 "is_configured": true, 00:10:43.371 "data_offset": 2048, 00:10:43.371 "data_size": 63488 00:10:43.371 }, 00:10:43.371 { 00:10:43.371 "name": "BaseBdev3", 00:10:43.371 "uuid": "14c7ae52-9e36-4377-ba22-ea3f4ca67394", 00:10:43.371 "is_configured": true, 00:10:43.371 "data_offset": 2048, 00:10:43.371 "data_size": 63488 00:10:43.371 }, 00:10:43.371 { 00:10:43.371 "name": "BaseBdev4", 00:10:43.371 "uuid": 
"036b65f8-822c-4e04-9c71-054b96961ede", 00:10:43.371 "is_configured": true, 00:10:43.371 "data_offset": 2048, 00:10:43.371 "data_size": 63488 00:10:43.371 } 00:10:43.371 ] 00:10:43.371 } 00:10:43.371 } 00:10:43.371 }' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:43.371 BaseBdev2 00:10:43.371 BaseBdev3 00:10:43.371 BaseBdev4' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.371 15:26:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.371 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 [2024-11-26 15:26:41.878417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.631 [2024-11-26 15:26:41.878447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.631 [2024-11-26 15:26:41.878527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.631 [2024-11-26 15:26:41.878802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.631 [2024-11-26 15:26:41.878812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 86182 
00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86182 ']' 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 86182 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86182 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86182' 00:10:43.631 killing process with pid 86182 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 86182 00:10:43.631 [2024-11-26 15:26:41.928514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.631 15:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 86182 00:10:43.631 [2024-11-26 15:26:41.969028] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.891 15:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:43.891 00:10:43.891 real 0m9.504s 00:10:43.891 user 0m16.260s 00:10:43.891 sys 0m1.933s 00:10:43.891 15:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.891 15:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.891 ************************************ 00:10:43.891 END TEST raid_state_function_test_sb 00:10:43.891 ************************************ 
00:10:43.891 15:26:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:43.891 15:26:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:43.891 15:26:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.891 15:26:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.891 ************************************ 00:10:43.891 START TEST raid_superblock_test 00:10:43.891 ************************************ 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local 
raid_bdev 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=86830 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 86830 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 86830 ']' 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.891 15:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.891 [2024-11-26 15:26:42.340035] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:10:43.891 [2024-11-26 15:26:42.340710] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86830 ] 00:10:44.151 [2024-11-26 15:26:42.475390] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:10:44.151 [2024-11-26 15:26:42.512991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.151 [2024-11-26 15:26:42.538756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.151 [2024-11-26 15:26:42.581770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.151 [2024-11-26 15:26:42.581808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.721 malloc1 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.721 [2024-11-26 15:26:43.189610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:44.721 [2024-11-26 15:26:43.189740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.721 [2024-11-26 15:26:43.189788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:44.721 [2024-11-26 15:26:43.189823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.721 [2024-11-26 15:26:43.191974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.721 [2024-11-26 15:26:43.192039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:44.721 pt1 00:10:44.721 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.981 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.982 malloc2 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.982 [2024-11-26 15:26:43.222293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.982 [2024-11-26 15:26:43.222386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.982 [2024-11-26 15:26:43.222425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:44.982 [2024-11-26 15:26:43.222434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.982 [2024-11-26 15:26:43.224521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.982 [2024-11-26 15:26:43.224555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.982 pt2 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.982 malloc3 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.982 [2024-11-26 15:26:43.250953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:44.982 [2024-11-26 15:26:43.251044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.982 [2024-11-26 15:26:43.251098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:44.982 [2024-11-26 15:26:43.251127] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.982 [2024-11-26 15:26:43.253250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.982 [2024-11-26 15:26:43.253318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:44.982 pt3 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.982 malloc4 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:44.982 15:26:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.982 [2024-11-26 15:26:43.291624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:44.982 [2024-11-26 15:26:43.291715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.982 [2024-11-26 15:26:43.291772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:44.982 [2024-11-26 15:26:43.291799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.982 [2024-11-26 15:26:43.293885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.982 [2024-11-26 15:26:43.293955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:44.982 pt4 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.982 [2024-11-26 15:26:43.303673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.982 [2024-11-26 15:26:43.305511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.982 [2024-11-26 15:26:43.305585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:44.982 [2024-11-26 15:26:43.305625] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:44.982 [2024-11-26 15:26:43.305784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:44.982 [2024-11-26 15:26:43.305796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:44.982 [2024-11-26 15:26:43.306065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:44.982 [2024-11-26 15:26:43.306244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:44.982 [2024-11-26 15:26:43.306258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:44.982 [2024-11-26 15:26:43.306385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.982 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.983 15:26:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.983 "name": "raid_bdev1", 00:10:44.983 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:44.983 "strip_size_kb": 0, 00:10:44.983 "state": "online", 00:10:44.983 "raid_level": "raid1", 00:10:44.983 "superblock": true, 00:10:44.983 "num_base_bdevs": 4, 00:10:44.983 "num_base_bdevs_discovered": 4, 00:10:44.983 "num_base_bdevs_operational": 4, 00:10:44.983 "base_bdevs_list": [ 00:10:44.983 { 00:10:44.983 "name": "pt1", 00:10:44.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.983 "is_configured": true, 00:10:44.983 "data_offset": 2048, 00:10:44.983 "data_size": 63488 00:10:44.983 }, 00:10:44.983 { 00:10:44.983 "name": "pt2", 00:10:44.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.983 "is_configured": true, 00:10:44.983 "data_offset": 2048, 00:10:44.983 "data_size": 63488 00:10:44.983 }, 00:10:44.983 { 00:10:44.983 "name": "pt3", 00:10:44.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.983 "is_configured": true, 00:10:44.983 "data_offset": 2048, 00:10:44.983 "data_size": 63488 00:10:44.983 }, 00:10:44.983 { 00:10:44.983 "name": "pt4", 00:10:44.983 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.983 "is_configured": true, 00:10:44.983 "data_offset": 2048, 00:10:44.983 "data_size": 63488 00:10:44.983 } 
00:10:44.983 ] 00:10:44.983 }' 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.983 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.243 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:45.243 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:45.243 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.243 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.243 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.243 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.243 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.504 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.504 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.504 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.504 [2024-11-26 15:26:43.724134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.504 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.504 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.504 "name": "raid_bdev1", 00:10:45.504 "aliases": [ 00:10:45.504 "ab221e89-2bf6-4f2b-8877-6f1376be5c49" 00:10:45.504 ], 00:10:45.504 "product_name": "Raid Volume", 00:10:45.504 "block_size": 512, 00:10:45.504 "num_blocks": 63488, 00:10:45.504 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:45.504 "assigned_rate_limits": { 00:10:45.504 "rw_ios_per_sec": 0, 
00:10:45.504 "rw_mbytes_per_sec": 0, 00:10:45.504 "r_mbytes_per_sec": 0, 00:10:45.504 "w_mbytes_per_sec": 0 00:10:45.504 }, 00:10:45.504 "claimed": false, 00:10:45.504 "zoned": false, 00:10:45.504 "supported_io_types": { 00:10:45.504 "read": true, 00:10:45.504 "write": true, 00:10:45.504 "unmap": false, 00:10:45.504 "flush": false, 00:10:45.504 "reset": true, 00:10:45.504 "nvme_admin": false, 00:10:45.504 "nvme_io": false, 00:10:45.504 "nvme_io_md": false, 00:10:45.504 "write_zeroes": true, 00:10:45.504 "zcopy": false, 00:10:45.504 "get_zone_info": false, 00:10:45.504 "zone_management": false, 00:10:45.504 "zone_append": false, 00:10:45.504 "compare": false, 00:10:45.504 "compare_and_write": false, 00:10:45.504 "abort": false, 00:10:45.504 "seek_hole": false, 00:10:45.504 "seek_data": false, 00:10:45.504 "copy": false, 00:10:45.504 "nvme_iov_md": false 00:10:45.504 }, 00:10:45.504 "memory_domains": [ 00:10:45.504 { 00:10:45.504 "dma_device_id": "system", 00:10:45.504 "dma_device_type": 1 00:10:45.504 }, 00:10:45.504 { 00:10:45.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.504 "dma_device_type": 2 00:10:45.504 }, 00:10:45.504 { 00:10:45.504 "dma_device_id": "system", 00:10:45.504 "dma_device_type": 1 00:10:45.504 }, 00:10:45.504 { 00:10:45.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.504 "dma_device_type": 2 00:10:45.504 }, 00:10:45.504 { 00:10:45.504 "dma_device_id": "system", 00:10:45.504 "dma_device_type": 1 00:10:45.504 }, 00:10:45.504 { 00:10:45.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.504 "dma_device_type": 2 00:10:45.504 }, 00:10:45.504 { 00:10:45.504 "dma_device_id": "system", 00:10:45.504 "dma_device_type": 1 00:10:45.504 }, 00:10:45.504 { 00:10:45.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.504 "dma_device_type": 2 00:10:45.504 } 00:10:45.504 ], 00:10:45.504 "driver_specific": { 00:10:45.504 "raid": { 00:10:45.504 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:45.504 "strip_size_kb": 0, 00:10:45.504 
"state": "online", 00:10:45.504 "raid_level": "raid1", 00:10:45.504 "superblock": true, 00:10:45.504 "num_base_bdevs": 4, 00:10:45.504 "num_base_bdevs_discovered": 4, 00:10:45.504 "num_base_bdevs_operational": 4, 00:10:45.504 "base_bdevs_list": [ 00:10:45.505 { 00:10:45.505 "name": "pt1", 00:10:45.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.505 "is_configured": true, 00:10:45.505 "data_offset": 2048, 00:10:45.505 "data_size": 63488 00:10:45.505 }, 00:10:45.505 { 00:10:45.505 "name": "pt2", 00:10:45.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.505 "is_configured": true, 00:10:45.505 "data_offset": 2048, 00:10:45.505 "data_size": 63488 00:10:45.505 }, 00:10:45.505 { 00:10:45.505 "name": "pt3", 00:10:45.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.505 "is_configured": true, 00:10:45.505 "data_offset": 2048, 00:10:45.505 "data_size": 63488 00:10:45.505 }, 00:10:45.505 { 00:10:45.505 "name": "pt4", 00:10:45.505 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.505 "is_configured": true, 00:10:45.505 "data_offset": 2048, 00:10:45.505 "data_size": 63488 00:10:45.505 } 00:10:45.505 ] 00:10:45.505 } 00:10:45.505 } 00:10:45.505 }' 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:45.505 pt2 00:10:45.505 pt3 00:10:45.505 pt4' 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.505 15:26:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.505 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.765 15:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:45.765 [2024-11-26 15:26:44.080231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.765 15:26:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab221e89-2bf6-4f2b-8877-6f1376be5c49 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ab221e89-2bf6-4f2b-8877-6f1376be5c49 ']' 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.765 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.766 [2024-11-26 15:26:44.123888] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.766 [2024-11-26 15:26:44.123918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.766 [2024-11-26 15:26:44.124015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.766 [2024-11-26 15:26:44.124114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.766 [2024-11-26 15:26:44.124128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.766 15:26:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt4 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.766 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.026 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.026 [2024-11-26 15:26:44.279970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:46.026 [2024-11-26 15:26:44.281822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:46.026 [2024-11-26 15:26:44.281864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:46.026 [2024-11-26 15:26:44.281894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:46.026 [2024-11-26 15:26:44.281938] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:46.027 [2024-11-26 15:26:44.281988] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:46.027 [2024-11-26 15:26:44.282005] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:46.027 [2024-11-26 15:26:44.282022] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:46.027 [2024-11-26 15:26:44.282035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.027 [2024-11-26 15:26:44.282045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:46.027 request: 00:10:46.027 { 00:10:46.027 "name": "raid_bdev1", 00:10:46.027 "raid_level": "raid1", 00:10:46.027 "base_bdevs": [ 00:10:46.027 "malloc1", 00:10:46.027 "malloc2", 00:10:46.027 "malloc3", 00:10:46.027 
"malloc4" 00:10:46.027 ], 00:10:46.027 "superblock": false, 00:10:46.027 "method": "bdev_raid_create", 00:10:46.027 "req_id": 1 00:10:46.027 } 00:10:46.027 Got JSON-RPC error response 00:10:46.027 response: 00:10:46.027 { 00:10:46.027 "code": -17, 00:10:46.027 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:46.027 } 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.027 [2024-11-26 15:26:44.347950] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:46.027 [2024-11-26 15:26:44.348009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.027 [2024-11-26 15:26:44.348026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:46.027 [2024-11-26 15:26:44.348036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.027 [2024-11-26 15:26:44.350213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.027 [2024-11-26 15:26:44.350249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:46.027 [2024-11-26 15:26:44.350322] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:46.027 [2024-11-26 15:26:44.350358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:46.027 pt1 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.027 "name": "raid_bdev1", 00:10:46.027 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:46.027 "strip_size_kb": 0, 00:10:46.027 "state": "configuring", 00:10:46.027 "raid_level": "raid1", 00:10:46.027 "superblock": true, 00:10:46.027 "num_base_bdevs": 4, 00:10:46.027 "num_base_bdevs_discovered": 1, 00:10:46.027 "num_base_bdevs_operational": 4, 00:10:46.027 "base_bdevs_list": [ 00:10:46.027 { 00:10:46.027 "name": "pt1", 00:10:46.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.027 "is_configured": true, 00:10:46.027 "data_offset": 2048, 00:10:46.027 "data_size": 63488 00:10:46.027 }, 00:10:46.027 { 00:10:46.027 "name": null, 00:10:46.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.027 "is_configured": false, 00:10:46.027 "data_offset": 2048, 00:10:46.027 "data_size": 63488 00:10:46.027 }, 00:10:46.027 { 00:10:46.027 "name": null, 00:10:46.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.027 "is_configured": false, 00:10:46.027 "data_offset": 2048, 00:10:46.027 "data_size": 63488 00:10:46.027 }, 00:10:46.027 { 00:10:46.027 "name": null, 00:10:46.027 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.027 "is_configured": 
false, 00:10:46.027 "data_offset": 2048, 00:10:46.027 "data_size": 63488 00:10:46.027 } 00:10:46.027 ] 00:10:46.027 }' 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.027 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.597 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:46.597 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.597 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.597 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.597 [2024-11-26 15:26:44.772090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.597 [2024-11-26 15:26:44.772237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.597 [2024-11-26 15:26:44.772301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:46.597 [2024-11-26 15:26:44.772337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.597 [2024-11-26 15:26:44.772783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.598 [2024-11-26 15:26:44.772852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.598 [2024-11-26 15:26:44.772959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.598 [2024-11-26 15:26:44.773021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.598 pt2 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 
00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.598 [2024-11-26 15:26:44.784089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.598 15:26:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.598 "name": "raid_bdev1", 00:10:46.598 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:46.598 "strip_size_kb": 0, 00:10:46.598 "state": "configuring", 00:10:46.598 "raid_level": "raid1", 00:10:46.598 "superblock": true, 00:10:46.598 "num_base_bdevs": 4, 00:10:46.598 "num_base_bdevs_discovered": 1, 00:10:46.598 "num_base_bdevs_operational": 4, 00:10:46.598 "base_bdevs_list": [ 00:10:46.598 { 00:10:46.598 "name": "pt1", 00:10:46.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.598 "is_configured": true, 00:10:46.598 "data_offset": 2048, 00:10:46.598 "data_size": 63488 00:10:46.598 }, 00:10:46.598 { 00:10:46.598 "name": null, 00:10:46.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.598 "is_configured": false, 00:10:46.598 "data_offset": 0, 00:10:46.598 "data_size": 63488 00:10:46.598 }, 00:10:46.598 { 00:10:46.598 "name": null, 00:10:46.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.598 "is_configured": false, 00:10:46.598 "data_offset": 2048, 00:10:46.598 "data_size": 63488 00:10:46.598 }, 00:10:46.598 { 00:10:46.598 "name": null, 00:10:46.598 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.598 "is_configured": false, 00:10:46.598 "data_offset": 2048, 00:10:46.598 "data_size": 63488 00:10:46.598 } 00:10:46.598 ] 00:10:46.598 }' 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.598 15:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.858 [2024-11-26 15:26:45.176173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.858 [2024-11-26 15:26:45.176291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.858 [2024-11-26 15:26:45.176328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:46.858 [2024-11-26 15:26:45.176364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.858 [2024-11-26 15:26:45.176799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.858 [2024-11-26 15:26:45.176856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.858 [2024-11-26 15:26:45.176959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.858 [2024-11-26 15:26:45.177008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.858 pt2 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.858 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.858 [2024-11-26 15:26:45.188156] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:46.858 [2024-11-26 15:26:45.188225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.858 [2024-11-26 15:26:45.188243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:46.859 [2024-11-26 15:26:45.188251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.859 [2024-11-26 15:26:45.188604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.859 [2024-11-26 15:26:45.188630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:46.859 [2024-11-26 15:26:45.188696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:46.859 [2024-11-26 15:26:45.188713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:46.859 pt3 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.859 [2024-11-26 15:26:45.200150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:46.859 [2024-11-26 15:26:45.200223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.859 [2024-11-26 15:26:45.200238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 
00:10:46.859 [2024-11-26 15:26:45.200246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.859 [2024-11-26 15:26:45.200547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.859 [2024-11-26 15:26:45.200568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:46.859 [2024-11-26 15:26:45.200623] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:46.859 [2024-11-26 15:26:45.200639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:46.859 [2024-11-26 15:26:45.200751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:46.859 [2024-11-26 15:26:45.200759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:46.859 [2024-11-26 15:26:45.200977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:46.859 [2024-11-26 15:26:45.201101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:46.859 [2024-11-26 15:26:45.201112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:46.859 [2024-11-26 15:26:45.201227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.859 pt4 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.859 "name": "raid_bdev1", 00:10:46.859 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:46.859 "strip_size_kb": 0, 00:10:46.859 "state": "online", 00:10:46.859 "raid_level": "raid1", 00:10:46.859 "superblock": true, 00:10:46.859 "num_base_bdevs": 4, 00:10:46.859 "num_base_bdevs_discovered": 4, 00:10:46.859 "num_base_bdevs_operational": 4, 00:10:46.859 "base_bdevs_list": [ 00:10:46.859 { 00:10:46.859 "name": "pt1", 00:10:46.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.859 "is_configured": true, 00:10:46.859 
"data_offset": 2048, 00:10:46.859 "data_size": 63488 00:10:46.859 }, 00:10:46.859 { 00:10:46.859 "name": "pt2", 00:10:46.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.859 "is_configured": true, 00:10:46.859 "data_offset": 2048, 00:10:46.859 "data_size": 63488 00:10:46.859 }, 00:10:46.859 { 00:10:46.859 "name": "pt3", 00:10:46.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.859 "is_configured": true, 00:10:46.859 "data_offset": 2048, 00:10:46.859 "data_size": 63488 00:10:46.859 }, 00:10:46.859 { 00:10:46.859 "name": "pt4", 00:10:46.859 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.859 "is_configured": true, 00:10:46.859 "data_offset": 2048, 00:10:46.859 "data_size": 63488 00:10:46.859 } 00:10:46.859 ] 00:10:46.859 }' 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.859 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.429 [2024-11-26 15:26:45.636586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.429 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.429 "name": "raid_bdev1", 00:10:47.429 "aliases": [ 00:10:47.429 "ab221e89-2bf6-4f2b-8877-6f1376be5c49" 00:10:47.429 ], 00:10:47.429 "product_name": "Raid Volume", 00:10:47.429 "block_size": 512, 00:10:47.429 "num_blocks": 63488, 00:10:47.429 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:47.429 "assigned_rate_limits": { 00:10:47.429 "rw_ios_per_sec": 0, 00:10:47.429 "rw_mbytes_per_sec": 0, 00:10:47.429 "r_mbytes_per_sec": 0, 00:10:47.429 "w_mbytes_per_sec": 0 00:10:47.429 }, 00:10:47.429 "claimed": false, 00:10:47.429 "zoned": false, 00:10:47.429 "supported_io_types": { 00:10:47.429 "read": true, 00:10:47.429 "write": true, 00:10:47.429 "unmap": false, 00:10:47.429 "flush": false, 00:10:47.429 "reset": true, 00:10:47.429 "nvme_admin": false, 00:10:47.430 "nvme_io": false, 00:10:47.430 "nvme_io_md": false, 00:10:47.430 "write_zeroes": true, 00:10:47.430 "zcopy": false, 00:10:47.430 "get_zone_info": false, 00:10:47.430 "zone_management": false, 00:10:47.430 "zone_append": false, 00:10:47.430 "compare": false, 00:10:47.430 "compare_and_write": false, 00:10:47.430 "abort": false, 00:10:47.430 "seek_hole": false, 00:10:47.430 "seek_data": false, 00:10:47.430 "copy": false, 00:10:47.430 "nvme_iov_md": false 00:10:47.430 }, 00:10:47.430 "memory_domains": [ 00:10:47.430 { 00:10:47.430 "dma_device_id": "system", 00:10:47.430 "dma_device_type": 1 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.430 "dma_device_type": 2 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "dma_device_id": "system", 00:10:47.430 "dma_device_type": 1 00:10:47.430 }, 00:10:47.430 { 
00:10:47.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.430 "dma_device_type": 2 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "dma_device_id": "system", 00:10:47.430 "dma_device_type": 1 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.430 "dma_device_type": 2 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "dma_device_id": "system", 00:10:47.430 "dma_device_type": 1 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.430 "dma_device_type": 2 00:10:47.430 } 00:10:47.430 ], 00:10:47.430 "driver_specific": { 00:10:47.430 "raid": { 00:10:47.430 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:47.430 "strip_size_kb": 0, 00:10:47.430 "state": "online", 00:10:47.430 "raid_level": "raid1", 00:10:47.430 "superblock": true, 00:10:47.430 "num_base_bdevs": 4, 00:10:47.430 "num_base_bdevs_discovered": 4, 00:10:47.430 "num_base_bdevs_operational": 4, 00:10:47.430 "base_bdevs_list": [ 00:10:47.430 { 00:10:47.430 "name": "pt1", 00:10:47.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.430 "is_configured": true, 00:10:47.430 "data_offset": 2048, 00:10:47.430 "data_size": 63488 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "name": "pt2", 00:10:47.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.430 "is_configured": true, 00:10:47.430 "data_offset": 2048, 00:10:47.430 "data_size": 63488 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "name": "pt3", 00:10:47.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.430 "is_configured": true, 00:10:47.430 "data_offset": 2048, 00:10:47.430 "data_size": 63488 00:10:47.430 }, 00:10:47.430 { 00:10:47.430 "name": "pt4", 00:10:47.430 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:47.430 "is_configured": true, 00:10:47.430 "data_offset": 2048, 00:10:47.430 "data_size": 63488 00:10:47.430 } 00:10:47.430 ] 00:10:47.430 } 00:10:47.430 } 00:10:47.430 }' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:47.430 pt2 00:10:47.430 pt3 00:10:47.430 pt4' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.430 15:26:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.430 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.690 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.690 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.690 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.690 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:47.690 [2024-11-26 15:26:45.908636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ab221e89-2bf6-4f2b-8877-6f1376be5c49 '!=' ab221e89-2bf6-4f2b-8877-6f1376be5c49 ']' 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.691 [2024-11-26 15:26:45.956414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:47.691 15:26:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.691 15:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.691 "name": "raid_bdev1", 00:10:47.691 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:47.691 "strip_size_kb": 0, 00:10:47.691 "state": "online", 00:10:47.691 "raid_level": "raid1", 00:10:47.691 "superblock": true, 00:10:47.691 "num_base_bdevs": 4, 00:10:47.691 "num_base_bdevs_discovered": 3, 00:10:47.691 "num_base_bdevs_operational": 3, 00:10:47.691 "base_bdevs_list": [ 00:10:47.691 { 
00:10:47.691 "name": null, 00:10:47.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.691 "is_configured": false, 00:10:47.691 "data_offset": 0, 00:10:47.691 "data_size": 63488 00:10:47.691 }, 00:10:47.691 { 00:10:47.691 "name": "pt2", 00:10:47.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.691 "is_configured": true, 00:10:47.691 "data_offset": 2048, 00:10:47.691 "data_size": 63488 00:10:47.691 }, 00:10:47.691 { 00:10:47.691 "name": "pt3", 00:10:47.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.691 "is_configured": true, 00:10:47.691 "data_offset": 2048, 00:10:47.691 "data_size": 63488 00:10:47.691 }, 00:10:47.691 { 00:10:47.691 "name": "pt4", 00:10:47.691 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:47.691 "is_configured": true, 00:10:47.691 "data_offset": 2048, 00:10:47.691 "data_size": 63488 00:10:47.691 } 00:10:47.691 ] 00:10:47.691 }' 00:10:47.691 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.691 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.951 [2024-11-26 15:26:46.348503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.951 [2024-11-26 15:26:46.348534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.951 [2024-11-26 15:26:46.348614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.951 [2024-11-26 15:26:46.348699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.951 [2024-11-26 15:26:46.348709] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:47.951 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.951 15:26:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.212 [2024-11-26 15:26:46.444501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:48.212 [2024-11-26 15:26:46.444553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.212 [2024-11-26 15:26:46.444573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:48.212 [2024-11-26 15:26:46.444581] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.212 [2024-11-26 15:26:46.446700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.212 [2024-11-26 15:26:46.446737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:48.212 [2024-11-26 15:26:46.446819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:48.212 [2024-11-26 15:26:46.446851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.212 pt2 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.212 "name": "raid_bdev1", 00:10:48.212 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:48.212 "strip_size_kb": 0, 00:10:48.212 "state": "configuring", 00:10:48.212 "raid_level": "raid1", 00:10:48.212 "superblock": true, 00:10:48.212 "num_base_bdevs": 4, 00:10:48.212 "num_base_bdevs_discovered": 1, 00:10:48.212 "num_base_bdevs_operational": 3, 00:10:48.212 "base_bdevs_list": [ 00:10:48.212 { 00:10:48.212 "name": null, 00:10:48.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.212 "is_configured": false, 00:10:48.212 "data_offset": 2048, 00:10:48.212 "data_size": 63488 00:10:48.212 }, 00:10:48.212 { 00:10:48.212 "name": "pt2", 00:10:48.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.212 "is_configured": true, 00:10:48.212 "data_offset": 2048, 00:10:48.212 "data_size": 63488 00:10:48.212 }, 00:10:48.212 { 00:10:48.212 "name": null, 00:10:48.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.212 "is_configured": false, 00:10:48.212 "data_offset": 2048, 00:10:48.212 "data_size": 63488 00:10:48.212 }, 00:10:48.212 { 00:10:48.212 "name": null, 00:10:48.212 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:48.212 "is_configured": false, 00:10:48.212 "data_offset": 2048, 00:10:48.212 "data_size": 63488 00:10:48.212 } 00:10:48.212 ] 00:10:48.212 }' 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.212 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.472 15:26:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.472 [2024-11-26 15:26:46.832669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:48.472 [2024-11-26 15:26:46.832797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.472 [2024-11-26 15:26:46.832841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:48.472 [2024-11-26 15:26:46.832881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.472 [2024-11-26 15:26:46.833317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.472 [2024-11-26 15:26:46.833374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:48.472 [2024-11-26 15:26:46.833478] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:48.472 [2024-11-26 15:26:46.833526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:48.472 pt3 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.472 "name": "raid_bdev1", 00:10:48.472 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:48.472 "strip_size_kb": 0, 00:10:48.472 "state": "configuring", 00:10:48.472 "raid_level": "raid1", 00:10:48.472 "superblock": true, 00:10:48.472 "num_base_bdevs": 4, 00:10:48.472 "num_base_bdevs_discovered": 2, 00:10:48.472 "num_base_bdevs_operational": 3, 00:10:48.472 "base_bdevs_list": [ 00:10:48.472 { 00:10:48.472 "name": null, 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.472 "is_configured": false, 00:10:48.472 "data_offset": 2048, 00:10:48.472 
"data_size": 63488 00:10:48.472 }, 00:10:48.472 { 00:10:48.472 "name": "pt2", 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.472 "is_configured": true, 00:10:48.472 "data_offset": 2048, 00:10:48.472 "data_size": 63488 00:10:48.472 }, 00:10:48.472 { 00:10:48.472 "name": "pt3", 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.472 "is_configured": true, 00:10:48.472 "data_offset": 2048, 00:10:48.472 "data_size": 63488 00:10:48.472 }, 00:10:48.472 { 00:10:48.472 "name": null, 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:48.472 "is_configured": false, 00:10:48.472 "data_offset": 2048, 00:10:48.472 "data_size": 63488 00:10:48.472 } 00:10:48.472 ] 00:10:48.472 }' 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.472 15:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.042 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.043 [2024-11-26 15:26:47.284805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:49.043 [2024-11-26 15:26:47.284873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.043 [2024-11-26 15:26:47.284896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:10:49.043 [2024-11-26 
15:26:47.284906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.043 [2024-11-26 15:26:47.285337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.043 [2024-11-26 15:26:47.285355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:49.043 [2024-11-26 15:26:47.285436] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:49.043 [2024-11-26 15:26:47.285465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:49.043 [2024-11-26 15:26:47.285575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.043 [2024-11-26 15:26:47.285584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:49.043 [2024-11-26 15:26:47.285820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:10:49.043 [2024-11-26 15:26:47.285955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.043 [2024-11-26 15:26:47.285967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:49.043 [2024-11-26 15:26:47.286072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.043 pt4 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.043 "name": "raid_bdev1", 00:10:49.043 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:49.043 "strip_size_kb": 0, 00:10:49.043 "state": "online", 00:10:49.043 "raid_level": "raid1", 00:10:49.043 "superblock": true, 00:10:49.043 "num_base_bdevs": 4, 00:10:49.043 "num_base_bdevs_discovered": 3, 00:10:49.043 "num_base_bdevs_operational": 3, 00:10:49.043 "base_bdevs_list": [ 00:10:49.043 { 00:10:49.043 "name": null, 00:10:49.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.043 "is_configured": false, 00:10:49.043 "data_offset": 2048, 00:10:49.043 "data_size": 63488 00:10:49.043 }, 00:10:49.043 { 00:10:49.043 "name": "pt2", 00:10:49.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.043 "is_configured": true, 00:10:49.043 
"data_offset": 2048, 00:10:49.043 "data_size": 63488 00:10:49.043 }, 00:10:49.043 { 00:10:49.043 "name": "pt3", 00:10:49.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.043 "is_configured": true, 00:10:49.043 "data_offset": 2048, 00:10:49.043 "data_size": 63488 00:10:49.043 }, 00:10:49.043 { 00:10:49.043 "name": "pt4", 00:10:49.043 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.043 "is_configured": true, 00:10:49.043 "data_offset": 2048, 00:10:49.043 "data_size": 63488 00:10:49.043 } 00:10:49.043 ] 00:10:49.043 }' 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.043 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.303 [2024-11-26 15:26:47.728905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.303 [2024-11-26 15:26:47.728989] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.303 [2024-11-26 15:26:47.729093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.303 [2024-11-26 15:26:47.729214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.303 [2024-11-26 15:26:47.729266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.303 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.563 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.563 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:49.563 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.563 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.563 [2024-11-26 15:26:47.788906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:49.563 [2024-11-26 15:26:47.788970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.563 [2024-11-26 15:26:47.789003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:10:49.563 [2024-11-26 15:26:47.789015] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:10:49.563 [2024-11-26 15:26:47.791322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.563 [2024-11-26 15:26:47.791361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:49.563 [2024-11-26 15:26:47.791432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:49.563 [2024-11-26 15:26:47.791467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:49.563 [2024-11-26 15:26:47.791576] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:49.563 [2024-11-26 15:26:47.791596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.563 [2024-11-26 15:26:47.791615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:10:49.563 [2024-11-26 15:26:47.791656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.564 [2024-11-26 15:26:47.791742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.564 pt1 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.564 "name": "raid_bdev1", 00:10:49.564 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:49.564 "strip_size_kb": 0, 00:10:49.564 "state": "configuring", 00:10:49.564 "raid_level": "raid1", 00:10:49.564 "superblock": true, 00:10:49.564 "num_base_bdevs": 4, 00:10:49.564 "num_base_bdevs_discovered": 2, 00:10:49.564 "num_base_bdevs_operational": 3, 00:10:49.564 "base_bdevs_list": [ 00:10:49.564 { 00:10:49.564 "name": null, 00:10:49.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.564 "is_configured": false, 00:10:49.564 "data_offset": 2048, 00:10:49.564 "data_size": 63488 00:10:49.564 }, 00:10:49.564 { 00:10:49.564 "name": "pt2", 00:10:49.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.564 "is_configured": true, 00:10:49.564 "data_offset": 2048, 00:10:49.564 "data_size": 
63488 00:10:49.564 }, 00:10:49.564 { 00:10:49.564 "name": "pt3", 00:10:49.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.564 "is_configured": true, 00:10:49.564 "data_offset": 2048, 00:10:49.564 "data_size": 63488 00:10:49.564 }, 00:10:49.564 { 00:10:49.564 "name": null, 00:10:49.564 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.564 "is_configured": false, 00:10:49.564 "data_offset": 2048, 00:10:49.564 "data_size": 63488 00:10:49.564 } 00:10:49.564 ] 00:10:49.564 }' 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.564 15:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.824 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:49.824 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:49.824 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.825 [2024-11-26 15:26:48.225034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:49.825 [2024-11-26 15:26:48.225147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.825 [2024-11-26 
15:26:48.225196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:10:49.825 [2024-11-26 15:26:48.225226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.825 [2024-11-26 15:26:48.225659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.825 [2024-11-26 15:26:48.225716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:49.825 [2024-11-26 15:26:48.225817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:49.825 [2024-11-26 15:26:48.225864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:49.825 [2024-11-26 15:26:48.225989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:49.825 [2024-11-26 15:26:48.226025] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:49.825 [2024-11-26 15:26:48.226316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:49.825 [2024-11-26 15:26:48.226477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:49.825 [2024-11-26 15:26:48.226519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:49.825 [2024-11-26 15:26:48.226653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.825 pt4 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.825 15:26:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.825 "name": "raid_bdev1", 00:10:49.825 "uuid": "ab221e89-2bf6-4f2b-8877-6f1376be5c49", 00:10:49.825 "strip_size_kb": 0, 00:10:49.825 "state": "online", 00:10:49.825 "raid_level": "raid1", 00:10:49.825 "superblock": true, 00:10:49.825 "num_base_bdevs": 4, 00:10:49.825 "num_base_bdevs_discovered": 3, 00:10:49.825 "num_base_bdevs_operational": 3, 00:10:49.825 "base_bdevs_list": [ 00:10:49.825 { 00:10:49.825 "name": null, 00:10:49.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.825 "is_configured": false, 00:10:49.825 "data_offset": 2048, 00:10:49.825 "data_size": 63488 00:10:49.825 }, 00:10:49.825 { 
00:10:49.825 "name": "pt2", 00:10:49.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.825 "is_configured": true, 00:10:49.825 "data_offset": 2048, 00:10:49.825 "data_size": 63488 00:10:49.825 }, 00:10:49.825 { 00:10:49.825 "name": "pt3", 00:10:49.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.825 "is_configured": true, 00:10:49.825 "data_offset": 2048, 00:10:49.825 "data_size": 63488 00:10:49.825 }, 00:10:49.825 { 00:10:49.825 "name": "pt4", 00:10:49.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.825 "is_configured": true, 00:10:49.825 "data_offset": 2048, 00:10:49.825 "data_size": 63488 00:10:49.825 } 00:10:49.825 ] 00:10:49.825 }' 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.825 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.395 [2024-11-26 
15:26:48.677529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ab221e89-2bf6-4f2b-8877-6f1376be5c49 '!=' ab221e89-2bf6-4f2b-8877-6f1376be5c49 ']' 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 86830 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 86830 ']' 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 86830 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86830 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86830' 00:10:50.395 killing process with pid 86830 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 86830 00:10:50.395 [2024-11-26 15:26:48.747678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.395 [2024-11-26 15:26:48.747821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.395 [2024-11-26 15:26:48.747930] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in
destruct 00:10:50.395 15:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 86830 00:10:50.395 [2024-11-26 15:26:48.747988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:50.395 [2024-11-26 15:26:48.791915] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.655 15:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:50.655 00:10:50.655 real 0m6.754s 00:10:50.655 user 0m11.360s 00:10:50.655 sys 0m1.400s 00:10:50.655 15:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.655 15:26:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.655 ************************************ 00:10:50.655 END TEST raid_superblock_test 00:10:50.655 ************************************ 00:10:50.655 15:26:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:50.655 15:26:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:50.655 15:26:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.655 15:26:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.655 ************************************ 00:10:50.655 START TEST raid_read_error_test 00:10:50.655 ************************************ 00:10:50.655 15:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:10:50.655 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:50.655 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:50.655 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:50.655 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:50.655 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.655 15:26:49
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:50.655 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.655 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.655 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aihaxXkQhF 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87301 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87301 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 87301 ']' 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.656 15:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.919 [2024-11-26 15:26:49.184452] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:10:50.919 [2024-11-26 15:26:49.184668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87301 ] 00:10:50.919 [2024-11-26 15:26:49.318656] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:50.919 [2024-11-26 15:26:49.342805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.919 [2024-11-26 15:26:49.368499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.179 [2024-11-26 15:26:49.412207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.179 [2024-11-26 15:26:49.412240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.748 BaseBdev1_malloc 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.748 15:26:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.748 true 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.748 [2024-11-26 15:26:50.040410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:51.748 [2024-11-26 15:26:50.040514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.748 [2024-11-26 15:26:50.040541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:51.748 [2024-11-26 15:26:50.040561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.748 [2024-11-26 15:26:50.042729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.748 [2024-11-26 15:26:50.042786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.748 BaseBdev1 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.748 BaseBdev2_malloc 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.748 true 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.748 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.748 [2024-11-26 15:26:50.077424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:51.748 [2024-11-26 15:26:50.077477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.749 [2024-11-26 15:26:50.077507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:51.749 [2024-11-26 15:26:50.077517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.749 [2024-11-26 15:26:50.079626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.749 [2024-11-26 15:26:50.079662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:51.749 BaseBdev2 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.749 BaseBdev3_malloc 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.749 true 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.749 [2024-11-26 15:26:50.118148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:51.749 [2024-11-26 15:26:50.118251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.749 [2024-11-26 15:26:50.118289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:51.749 [2024-11-26 15:26:50.118300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.749 [2024-11-26 15:26:50.120355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.749 [2024-11-26 15:26:50.120394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:51.749 BaseBdev3 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.749 15:26:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.749 BaseBdev4_malloc 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.749 true 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.749 [2024-11-26 15:26:50.178853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:51.749 [2024-11-26 15:26:50.178956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.749 [2024-11-26 15:26:50.178978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:51.749 [2024-11-26 15:26:50.178989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.749 [2024-11-26 15:26:50.181061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.749 
[2024-11-26 15:26:50.181105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:51.749 BaseBdev4 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.749 [2024-11-26 15:26:50.190884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.749 [2024-11-26 15:26:50.192768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.749 [2024-11-26 15:26:50.192846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.749 [2024-11-26 15:26:50.192900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:51.749 [2024-11-26 15:26:50.193104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:51.749 [2024-11-26 15:26:50.193117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:51.749 [2024-11-26 15:26:50.193401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:51.749 [2024-11-26 15:26:50.193545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:51.749 [2024-11-26 15:26:50.193555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:51.749 [2024-11-26 15:26:50.193687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.749 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.009 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.009 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.009 "name": "raid_bdev1", 00:10:52.009 "uuid": "b14e5ed2-653f-4932-9eae-768595d965b0", 00:10:52.009 "strip_size_kb": 0, 00:10:52.009 "state": "online", 00:10:52.009 "raid_level": "raid1", 00:10:52.009 "superblock": true, 
00:10:52.009 "num_base_bdevs": 4, 00:10:52.009 "num_base_bdevs_discovered": 4, 00:10:52.009 "num_base_bdevs_operational": 4, 00:10:52.009 "base_bdevs_list": [ 00:10:52.009 { 00:10:52.009 "name": "BaseBdev1", 00:10:52.009 "uuid": "e22a8282-1256-5b31-a2a6-ff5a988ad2cf", 00:10:52.009 "is_configured": true, 00:10:52.009 "data_offset": 2048, 00:10:52.009 "data_size": 63488 00:10:52.009 }, 00:10:52.009 { 00:10:52.009 "name": "BaseBdev2", 00:10:52.009 "uuid": "4e497287-9c5f-5ffb-be9d-9426cfc9896c", 00:10:52.009 "is_configured": true, 00:10:52.009 "data_offset": 2048, 00:10:52.009 "data_size": 63488 00:10:52.009 }, 00:10:52.009 { 00:10:52.009 "name": "BaseBdev3", 00:10:52.009 "uuid": "b36e0291-4f71-5bfe-bc1b-c11bca33b575", 00:10:52.009 "is_configured": true, 00:10:52.009 "data_offset": 2048, 00:10:52.009 "data_size": 63488 00:10:52.009 }, 00:10:52.009 { 00:10:52.009 "name": "BaseBdev4", 00:10:52.009 "uuid": "bdf63187-3329-5710-8193-220a081b100b", 00:10:52.009 "is_configured": true, 00:10:52.009 "data_offset": 2048, 00:10:52.009 "data_size": 63488 00:10:52.009 } 00:10:52.009 ] 00:10:52.009 }' 00:10:52.009 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.009 15:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.270 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:52.270 15:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:52.270 [2024-11-26 15:26:50.715446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.208 15:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.468 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.468 "name": "raid_bdev1", 00:10:53.468 "uuid": "b14e5ed2-653f-4932-9eae-768595d965b0", 00:10:53.468 "strip_size_kb": 0, 00:10:53.468 "state": "online", 00:10:53.468 "raid_level": "raid1", 00:10:53.468 "superblock": true, 00:10:53.468 "num_base_bdevs": 4, 00:10:53.468 "num_base_bdevs_discovered": 4, 00:10:53.468 "num_base_bdevs_operational": 4, 00:10:53.469 "base_bdevs_list": [ 00:10:53.469 { 00:10:53.469 "name": "BaseBdev1", 00:10:53.469 "uuid": "e22a8282-1256-5b31-a2a6-ff5a988ad2cf", 00:10:53.469 "is_configured": true, 00:10:53.469 "data_offset": 2048, 00:10:53.469 "data_size": 63488 00:10:53.469 }, 00:10:53.469 { 00:10:53.469 "name": "BaseBdev2", 00:10:53.469 "uuid": "4e497287-9c5f-5ffb-be9d-9426cfc9896c", 00:10:53.469 "is_configured": true, 00:10:53.469 "data_offset": 2048, 00:10:53.469 "data_size": 63488 00:10:53.469 }, 00:10:53.469 { 00:10:53.469 "name": "BaseBdev3", 00:10:53.469 "uuid": "b36e0291-4f71-5bfe-bc1b-c11bca33b575", 00:10:53.469 "is_configured": true, 00:10:53.469 "data_offset": 2048, 00:10:53.469 "data_size": 63488 00:10:53.469 }, 00:10:53.469 { 00:10:53.469 "name": "BaseBdev4", 00:10:53.469 "uuid": "bdf63187-3329-5710-8193-220a081b100b", 00:10:53.469 "is_configured": true, 00:10:53.469 "data_offset": 2048, 00:10:53.469 "data_size": 63488 00:10:53.469 } 00:10:53.469 ] 00:10:53.469 }' 00:10:53.469 15:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.469 15:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
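An aside on the geometry reported in the raid_bdev_info JSON above: the base-bdev numbers follow from the `bdev_malloc_create 32 512` calls earlier in the log. A minimal sketch of that arithmetic, assuming (as the log suggests) a 32 MiB malloc bdev with 512-byte blocks and a 2048-block superblock region:

```python
# Each base bdev is created with `bdev_malloc_create 32 512`:
# 32 MiB of capacity in 512-byte blocks -> 65536 total blocks.
malloc_size_mib = 32
block_size = 512
total_blocks = malloc_size_mib * 1024 * 1024 // block_size

# The superblock region occupies the first 2048 blocks ("data_offset": 2048),
# leaving 63488 data blocks -- matching "data_size": 63488 in the JSON and
# the "blockcnt 63488, blocklen 512" line logged at raid configure time.
data_offset = 2048
data_size = total_blocks - data_offset
print(total_blocks, data_size)  # 65536 63488
```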
00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.729 [2024-11-26 15:26:52.097572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.729 [2024-11-26 15:26:52.097608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.729 [2024-11-26 15:26:52.100410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.729 [2024-11-26 15:26:52.100467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.729 [2024-11-26 15:26:52.100585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.729 [2024-11-26 15:26:52.100600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:53.729 { 00:10:53.729 "results": [ 00:10:53.729 { 00:10:53.729 "job": "raid_bdev1", 00:10:53.729 "core_mask": "0x1", 00:10:53.729 "workload": "randrw", 00:10:53.729 "percentage": 50, 00:10:53.729 "status": "finished", 00:10:53.729 "queue_depth": 1, 00:10:53.729 "io_size": 131072, 00:10:53.729 "runtime": 1.380188, 00:10:53.729 "iops": 11337.585894095588, 00:10:53.729 "mibps": 1417.1982367619485, 00:10:53.729 "io_failed": 0, 00:10:53.729 "io_timeout": 0, 00:10:53.729 "avg_latency_us": 85.57230036128212, 00:10:53.729 "min_latency_us": 23.763618931404167, 00:10:53.729 "max_latency_us": 1492.3106423777565 00:10:53.729 } 00:10:53.729 ], 00:10:53.729 "core_count": 1 00:10:53.729 } 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87301 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 87301 ']' 
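The bdevperf "results" record above can be cross-checked: throughput in MiB/s equals IOPS times the per-I/O size divided by 2^20, where the I/O size of 128 KiB comes from bdevperf's `-o 128k` option. A quick sketch using the values copied from the JSON:

```python
# Values copied from the "results" entry for job raid_bdev1 above.
iops = 11337.585894095588
io_size = 131072  # bytes per I/O: 128 KiB, from the bdevperf `-o 128k` flag

# MiB/s = IOPS * bytes-per-IO / 2^20; with 128 KiB I/Os this reduces to IOPS / 8,
# which reproduces the reported "mibps" value.
mibps = iops * io_size / 2**20
print(round(mibps, 3))  # 1417.198
```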
00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 87301 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87301 00:10:53.729 killing process with pid 87301 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87301' 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 87301 00:10:53.729 [2024-11-26 15:26:52.136347] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.729 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 87301 00:10:53.729 [2024-11-26 15:26:52.171820] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aihaxXkQhF 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:53.989 ************************************ 00:10:53.989 END TEST raid_read_error_test 00:10:53.989 ************************************ 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- 
# case $1 in 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:53.989 00:10:53.989 real 0m3.312s 00:10:53.989 user 0m4.181s 00:10:53.989 sys 0m0.538s 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.989 15:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.989 15:26:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:53.989 15:26:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:53.989 15:26:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.989 15:26:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.989 ************************************ 00:10:53.989 START TEST raid_write_error_test 00:10:53.989 ************************************ 00:10:53.990 15:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:10:53.990 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:53.990 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:53.990 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.250 
15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:54.250 15:26:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3MLYrubR4O 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87430 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87430 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 87430 ']' 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.250 15:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.250 [2024-11-26 15:26:52.565966] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:10:54.250 [2024-11-26 15:26:52.566161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87430 ] 00:10:54.250 [2024-11-26 15:26:52.723102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:54.510 [2024-11-26 15:26:52.762309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.510 [2024-11-26 15:26:52.788310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.510 [2024-11-26 15:26:52.831772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.510 [2024-11-26 15:26:52.831891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.082 BaseBdev1_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.082 true 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.082 [2024-11-26 15:26:53.435976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:55.082 [2024-11-26 15:26:53.436077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.082 [2024-11-26 15:26:53.436130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:55.082 [2024-11-26 15:26:53.436168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.082 [2024-11-26 15:26:53.438469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.082 [2024-11-26 15:26:53.438543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.082 BaseBdev1 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.082 BaseBdev2_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.082 true 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.082 [2024-11-26 15:26:53.476843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.082 [2024-11-26 15:26:53.476910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.082 [2024-11-26 15:26:53.476941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:55.082 [2024-11-26 15:26:53.476951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.082 [2024-11-26 15:26:53.479070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.082 [2024-11-26 15:26:53.479107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.082 BaseBdev2 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.082 BaseBdev3_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.082 true 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.082 [2024-11-26 15:26:53.517610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.082 [2024-11-26 15:26:53.517664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.082 [2024-11-26 15:26:53.517681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:55.082 [2024-11-26 15:26:53.517691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.082 [2024-11-26 15:26:53.519785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.082 [2024-11-26 15:26:53.519877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:55.082 BaseBdev3 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.082 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.343 BaseBdev4_malloc 00:10:55.343 
15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.343 true 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.343 [2024-11-26 15:26:53.575595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:55.343 [2024-11-26 15:26:53.575756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.343 [2024-11-26 15:26:53.575792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:55.343 [2024-11-26 15:26:53.575810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.343 [2024-11-26 15:26:53.578224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.343 [2024-11-26 15:26:53.578286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:55.343 BaseBdev4 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:55.343 15:26:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.343 [2024-11-26 15:26:53.587574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.343 [2024-11-26 15:26:53.589575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.343 [2024-11-26 15:26:53.589658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.343 [2024-11-26 15:26:53.589719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.343 [2024-11-26 15:26:53.589939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:55.343 [2024-11-26 15:26:53.589958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:55.343 [2024-11-26 15:26:53.590226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:55.343 [2024-11-26 15:26:53.590371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:55.343 [2024-11-26 15:26:53.590386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:55.343 [2024-11-26 15:26:53.590523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.343 "name": "raid_bdev1", 00:10:55.343 "uuid": "aea07340-fa4b-4e8d-9223-958703624f0e", 00:10:55.343 "strip_size_kb": 0, 00:10:55.343 "state": "online", 00:10:55.343 "raid_level": "raid1", 00:10:55.343 "superblock": true, 00:10:55.343 "num_base_bdevs": 4, 00:10:55.343 "num_base_bdevs_discovered": 4, 00:10:55.343 "num_base_bdevs_operational": 4, 00:10:55.343 "base_bdevs_list": [ 00:10:55.343 { 00:10:55.343 "name": "BaseBdev1", 00:10:55.343 "uuid": "5afd6fed-8f80-5eaa-99c9-a9f2a5be3a54", 00:10:55.343 "is_configured": true, 00:10:55.343 "data_offset": 2048, 00:10:55.343 "data_size": 63488 00:10:55.343 }, 00:10:55.343 { 00:10:55.343 
"name": "BaseBdev2", 00:10:55.343 "uuid": "3dc14bc9-dcef-5548-877e-fe3e1041139a", 00:10:55.343 "is_configured": true, 00:10:55.343 "data_offset": 2048, 00:10:55.343 "data_size": 63488 00:10:55.343 }, 00:10:55.343 { 00:10:55.343 "name": "BaseBdev3", 00:10:55.343 "uuid": "f549b0c8-1464-57d9-b360-c2a42d704d49", 00:10:55.343 "is_configured": true, 00:10:55.343 "data_offset": 2048, 00:10:55.343 "data_size": 63488 00:10:55.343 }, 00:10:55.343 { 00:10:55.343 "name": "BaseBdev4", 00:10:55.343 "uuid": "3daf6174-fcd3-5053-8d11-9676ef80f322", 00:10:55.343 "is_configured": true, 00:10:55.343 "data_offset": 2048, 00:10:55.343 "data_size": 63488 00:10:55.343 } 00:10:55.343 ] 00:10:55.343 }' 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.343 15:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.603 15:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:55.603 15:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:55.863 [2024-11-26 15:26:54.112104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.803 [2024-11-26 15:26:55.042376] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:56.803 [2024-11-26 15:26:55.042434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.803 [2024-11-26 15:26:55.042680] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 
raid_ch: 0x60d000006e50 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.803 "name": "raid_bdev1", 00:10:56.803 "uuid": "aea07340-fa4b-4e8d-9223-958703624f0e", 00:10:56.803 "strip_size_kb": 0, 00:10:56.803 "state": "online", 00:10:56.803 "raid_level": "raid1", 00:10:56.803 "superblock": true, 00:10:56.803 "num_base_bdevs": 4, 00:10:56.803 "num_base_bdevs_discovered": 3, 00:10:56.803 "num_base_bdevs_operational": 3, 00:10:56.803 "base_bdevs_list": [ 00:10:56.803 { 00:10:56.803 "name": null, 00:10:56.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.803 "is_configured": false, 00:10:56.803 "data_offset": 0, 00:10:56.803 "data_size": 63488 00:10:56.803 }, 00:10:56.803 { 00:10:56.803 "name": "BaseBdev2", 00:10:56.803 "uuid": "3dc14bc9-dcef-5548-877e-fe3e1041139a", 00:10:56.803 "is_configured": true, 00:10:56.803 "data_offset": 2048, 00:10:56.803 "data_size": 63488 00:10:56.803 }, 00:10:56.803 { 00:10:56.803 "name": "BaseBdev3", 00:10:56.803 "uuid": "f549b0c8-1464-57d9-b360-c2a42d704d49", 00:10:56.803 "is_configured": true, 00:10:56.803 "data_offset": 2048, 00:10:56.803 "data_size": 63488 00:10:56.803 }, 00:10:56.803 { 00:10:56.803 "name": "BaseBdev4", 00:10:56.803 "uuid": "3daf6174-fcd3-5053-8d11-9676ef80f322", 00:10:56.803 "is_configured": true, 00:10:56.803 "data_offset": 2048, 00:10:56.803 "data_size": 63488 00:10:56.803 } 00:10:56.803 ] 00:10:56.803 }' 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.803 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.063 [2024-11-26 15:26:55.492679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.063 [2024-11-26 15:26:55.492780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.063 [2024-11-26 15:26:55.495348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.063 [2024-11-26 15:26:55.495445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.063 [2024-11-26 15:26:55.495583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.063 [2024-11-26 15:26:55.495629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:57.063 { 00:10:57.063 "results": [ 00:10:57.063 { 00:10:57.063 "job": "raid_bdev1", 00:10:57.063 "core_mask": "0x1", 00:10:57.063 "workload": "randrw", 00:10:57.063 "percentage": 50, 00:10:57.063 "status": "finished", 00:10:57.063 "queue_depth": 1, 00:10:57.063 "io_size": 131072, 00:10:57.063 "runtime": 1.378658, 00:10:57.063 "iops": 12105.25017807172, 00:10:57.063 "mibps": 1513.156272258965, 00:10:57.063 "io_failed": 0, 00:10:57.063 "io_timeout": 0, 00:10:57.063 "avg_latency_us": 79.97364055133086, 00:10:57.063 "min_latency_us": 23.875185217467095, 00:10:57.063 "max_latency_us": 1685.0971846945001 00:10:57.063 } 00:10:57.063 ], 00:10:57.063 "core_count": 1 00:10:57.063 } 00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87430 00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 87430 ']' 
00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 87430 00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.063 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87430 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87430' 00:10:57.323 killing process with pid 87430 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 87430 00:10:57.323 [2024-11-26 15:26:55.541798] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 87430 00:10:57.323 [2024-11-26 15:26:55.577526] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3MLYrubR4O 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:57.323 15:26:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:57.323 00:10:57.323 real 0m3.336s 00:10:57.323 user 0m4.170s 00:10:57.323 sys 0m0.558s 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.323 ************************************ 00:10:57.323 END TEST raid_write_error_test 00:10:57.323 ************************************ 00:10:57.323 15:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.583 15:26:55 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:57.584 15:26:55 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:57.584 15:26:55 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:57.584 15:26:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:57.584 15:26:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.584 15:26:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.584 ************************************ 00:10:57.584 START TEST raid_rebuild_test 00:10:57.584 ************************************ 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87557 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 87557 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 87557 ']' 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.584 15:26:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.584 [2024-11-26 15:26:55.968349] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:10:57.584 [2024-11-26 15:26:55.968547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:10:57.584 Zero copy mechanism will not be used. 00:10:57.584 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87557 ] 00:10:57.844 [2024-11-26 15:26:56.104866] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:57.844 [2024-11-26 15:26:56.142728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.844 [2024-11-26 15:26:56.169795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.844 [2024-11-26 15:26:56.213610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.844 [2024-11-26 15:26:56.213649] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 BaseBdev1_malloc 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 [2024-11-26 15:26:56.813780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:58.415 [2024-11-26 15:26:56.813849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.415 [2024-11-26 15:26:56.813886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:58.415 [2024-11-26 15:26:56.813899] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.415 [2024-11-26 15:26:56.815994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.415 [2024-11-26 15:26:56.816074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.415 BaseBdev1 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 BaseBdev2_malloc 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 [2024-11-26 15:26:56.842434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:58.415 [2024-11-26 15:26:56.842529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.415 [2024-11-26 15:26:56.842553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:58.415 [2024-11-26 15:26:56.842563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.415 [2024-11-26 15:26:56.844587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.415 [2024-11-26 15:26:56.844626] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.415 BaseBdev2 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 spare_malloc 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 spare_delay 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.415 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 [2024-11-26 15:26:56.883138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:58.415 [2024-11-26 15:26:56.883214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.415 [2024-11-26 15:26:56.883234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:58.415 [2024-11-26 15:26:56.883247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.415 [2024-11-26 
15:26:56.885370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.415 [2024-11-26 15:26:56.885451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:58.675 spare 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.675 [2024-11-26 15:26:56.895216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.675 [2024-11-26 15:26:56.897082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.675 [2024-11-26 15:26:56.897236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:58.675 [2024-11-26 15:26:56.897255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:58.675 [2024-11-26 15:26:56.897502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:58.675 [2024-11-26 15:26:56.897642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:58.675 [2024-11-26 15:26:56.897653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:58.675 [2024-11-26 15:26:56.897775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:58.675 15:26:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.675 "name": "raid_bdev1", 00:10:58.675 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:10:58.675 "strip_size_kb": 0, 00:10:58.675 "state": "online", 00:10:58.675 "raid_level": "raid1", 00:10:58.675 "superblock": false, 00:10:58.675 "num_base_bdevs": 2, 00:10:58.675 "num_base_bdevs_discovered": 2, 00:10:58.675 "num_base_bdevs_operational": 2, 00:10:58.675 "base_bdevs_list": [ 00:10:58.675 { 00:10:58.675 "name": "BaseBdev1", 
00:10:58.675 "uuid": "99b87839-a573-53c3-85f5-7f06e04ae87a", 00:10:58.675 "is_configured": true, 00:10:58.675 "data_offset": 0, 00:10:58.675 "data_size": 65536 00:10:58.675 }, 00:10:58.675 { 00:10:58.675 "name": "BaseBdev2", 00:10:58.675 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:10:58.675 "is_configured": true, 00:10:58.675 "data_offset": 0, 00:10:58.675 "data_size": 65536 00:10:58.675 } 00:10:58.675 ] 00:10:58.675 }' 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.675 15:26:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.936 [2024-11-26 15:26:57.319582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:58.936 
15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:58.936 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:59.196 [2024-11-26 15:26:57.579458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:59.196 /dev/nbd0 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.196 1+0 records in 00:10:59.196 1+0 records out 00:10:59.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393868 s, 10.4 MB/s 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:59.196 15:26:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:11:03.392 65536+0 records in 00:11:03.392 65536+0 records out 00:11:03.392 33554432 bytes (34 MB, 32 MiB) copied, 3.78938 s, 8.9 MB/s 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:03.392 [2024-11-26 15:27:01.625105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.392 [2024-11-26 15:27:01.657237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.392 15:27:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.392 "name": "raid_bdev1", 00:11:03.392 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:03.392 "strip_size_kb": 0, 00:11:03.392 "state": "online", 00:11:03.392 "raid_level": "raid1", 00:11:03.392 "superblock": false, 00:11:03.392 "num_base_bdevs": 2, 00:11:03.392 "num_base_bdevs_discovered": 1, 00:11:03.392 "num_base_bdevs_operational": 1, 00:11:03.392 "base_bdevs_list": [ 00:11:03.392 { 00:11:03.392 "name": null, 00:11:03.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.392 "is_configured": false, 00:11:03.392 "data_offset": 0, 00:11:03.392 "data_size": 65536 00:11:03.392 }, 00:11:03.392 { 00:11:03.392 "name": "BaseBdev2", 00:11:03.392 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:03.392 "is_configured": true, 00:11:03.392 "data_offset": 0, 00:11:03.392 "data_size": 65536 00:11:03.392 } 00:11:03.392 ] 00:11:03.392 }' 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.392 15:27:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.665 15:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:03.665 15:27:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.665 15:27:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.665 [2024-11-26 15:27:02.097351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:03.665 [2024-11-26 15:27:02.116374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09fe0 00:11:03.665 15:27:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.665 15:27:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:03.665 [2024-11-26 15:27:02.119115] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.063 "name": "raid_bdev1", 00:11:05.063 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:05.063 "strip_size_kb": 0, 00:11:05.063 "state": "online", 00:11:05.063 "raid_level": "raid1", 00:11:05.063 "superblock": false, 00:11:05.063 "num_base_bdevs": 2, 00:11:05.063 "num_base_bdevs_discovered": 2, 00:11:05.063 "num_base_bdevs_operational": 2, 00:11:05.063 "process": { 00:11:05.063 "type": "rebuild", 00:11:05.063 "target": "spare", 00:11:05.063 "progress": { 00:11:05.063 "blocks": 20480, 00:11:05.063 "percent": 31 00:11:05.063 } 00:11:05.063 }, 00:11:05.063 "base_bdevs_list": [ 00:11:05.063 { 00:11:05.063 "name": "spare", 00:11:05.063 "uuid": "14cd5896-4387-59d6-8720-da692bbf7245", 00:11:05.063 "is_configured": true, 00:11:05.063 "data_offset": 0, 00:11:05.063 
"data_size": 65536 00:11:05.063 }, 00:11:05.063 { 00:11:05.063 "name": "BaseBdev2", 00:11:05.063 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:05.063 "is_configured": true, 00:11:05.063 "data_offset": 0, 00:11:05.063 "data_size": 65536 00:11:05.063 } 00:11:05.063 ] 00:11:05.063 }' 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.063 [2024-11-26 15:27:03.277469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:05.063 [2024-11-26 15:27:03.331205] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:05.063 [2024-11-26 15:27:03.331763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.063 [2024-11-26 15:27:03.331793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:05.063 [2024-11-26 15:27:03.331807] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.063 "name": "raid_bdev1", 00:11:05.063 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:05.063 "strip_size_kb": 0, 00:11:05.063 "state": "online", 00:11:05.063 "raid_level": "raid1", 00:11:05.063 "superblock": false, 00:11:05.063 "num_base_bdevs": 2, 00:11:05.063 "num_base_bdevs_discovered": 1, 00:11:05.063 "num_base_bdevs_operational": 1, 00:11:05.063 "base_bdevs_list": [ 00:11:05.063 { 00:11:05.063 "name": null, 00:11:05.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.063 
"is_configured": false, 00:11:05.063 "data_offset": 0, 00:11:05.063 "data_size": 65536 00:11:05.063 }, 00:11:05.063 { 00:11:05.063 "name": "BaseBdev2", 00:11:05.063 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:05.063 "is_configured": true, 00:11:05.063 "data_offset": 0, 00:11:05.063 "data_size": 65536 00:11:05.063 } 00:11:05.063 ] 00:11:05.063 }' 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.063 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.323 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.583 "name": "raid_bdev1", 00:11:05.583 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:05.583 "strip_size_kb": 0, 00:11:05.583 "state": "online", 00:11:05.583 "raid_level": "raid1", 00:11:05.583 "superblock": false, 00:11:05.583 "num_base_bdevs": 2, 00:11:05.583 
"num_base_bdevs_discovered": 1, 00:11:05.583 "num_base_bdevs_operational": 1, 00:11:05.583 "base_bdevs_list": [ 00:11:05.583 { 00:11:05.583 "name": null, 00:11:05.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.583 "is_configured": false, 00:11:05.583 "data_offset": 0, 00:11:05.583 "data_size": 65536 00:11:05.583 }, 00:11:05.583 { 00:11:05.583 "name": "BaseBdev2", 00:11:05.583 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:05.583 "is_configured": true, 00:11:05.583 "data_offset": 0, 00:11:05.583 "data_size": 65536 00:11:05.583 } 00:11:05.583 ] 00:11:05.583 }' 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.583 [2024-11-26 15:27:03.904822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:05.583 [2024-11-26 15:27:03.913701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a0b0 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.583 15:27:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:05.583 [2024-11-26 15:27:03.915855] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:06.523 "name": "raid_bdev1", 00:11:06.523 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:06.523 "strip_size_kb": 0, 00:11:06.523 "state": "online", 00:11:06.523 "raid_level": "raid1", 00:11:06.523 "superblock": false, 00:11:06.523 "num_base_bdevs": 2, 00:11:06.523 "num_base_bdevs_discovered": 2, 00:11:06.523 "num_base_bdevs_operational": 2, 00:11:06.523 "process": { 00:11:06.523 "type": "rebuild", 00:11:06.523 "target": "spare", 00:11:06.523 "progress": { 00:11:06.523 "blocks": 20480, 00:11:06.523 "percent": 31 00:11:06.523 } 00:11:06.523 }, 00:11:06.523 "base_bdevs_list": [ 00:11:06.523 { 00:11:06.523 "name": "spare", 00:11:06.523 "uuid": "14cd5896-4387-59d6-8720-da692bbf7245", 00:11:06.523 "is_configured": true, 00:11:06.523 "data_offset": 0, 00:11:06.523 "data_size": 65536 00:11:06.523 }, 00:11:06.523 { 00:11:06.523 "name": "BaseBdev2", 00:11:06.523 "uuid": 
"515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:06.523 "is_configured": true, 00:11:06.523 "data_offset": 0, 00:11:06.523 "data_size": 65536 00:11:06.523 } 00:11:06.523 ] 00:11:06.523 }' 00:11:06.523 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:06.783 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:06.784 15:27:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=284 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:06.784 "name": "raid_bdev1", 00:11:06.784 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:06.784 "strip_size_kb": 0, 00:11:06.784 "state": "online", 00:11:06.784 "raid_level": "raid1", 00:11:06.784 "superblock": false, 00:11:06.784 "num_base_bdevs": 2, 00:11:06.784 "num_base_bdevs_discovered": 2, 00:11:06.784 "num_base_bdevs_operational": 2, 00:11:06.784 "process": { 00:11:06.784 "type": "rebuild", 00:11:06.784 "target": "spare", 00:11:06.784 "progress": { 00:11:06.784 "blocks": 22528, 00:11:06.784 "percent": 34 00:11:06.784 } 00:11:06.784 }, 00:11:06.784 "base_bdevs_list": [ 00:11:06.784 { 00:11:06.784 "name": "spare", 00:11:06.784 "uuid": "14cd5896-4387-59d6-8720-da692bbf7245", 00:11:06.784 "is_configured": true, 00:11:06.784 "data_offset": 0, 00:11:06.784 "data_size": 65536 00:11:06.784 }, 00:11:06.784 { 00:11:06.784 "name": "BaseBdev2", 00:11:06.784 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:06.784 "is_configured": true, 00:11:06.784 "data_offset": 0, 00:11:06.784 "data_size": 65536 00:11:06.784 } 00:11:06.784 ] 00:11:06.784 }' 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:06.784 15:27:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.721 15:27:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.979 15:27:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.979 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:07.979 "name": "raid_bdev1", 00:11:07.979 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:07.979 "strip_size_kb": 0, 00:11:07.979 "state": "online", 00:11:07.979 "raid_level": "raid1", 00:11:07.979 "superblock": false, 00:11:07.979 "num_base_bdevs": 2, 00:11:07.979 "num_base_bdevs_discovered": 2, 00:11:07.979 "num_base_bdevs_operational": 2, 00:11:07.979 "process": { 00:11:07.979 "type": "rebuild", 00:11:07.979 "target": "spare", 00:11:07.979 "progress": { 00:11:07.979 "blocks": 45056, 00:11:07.979 "percent": 68 00:11:07.979 } 00:11:07.979 }, 00:11:07.979 "base_bdevs_list": [ 00:11:07.979 { 00:11:07.979 "name": "spare", 00:11:07.979 "uuid": 
"14cd5896-4387-59d6-8720-da692bbf7245", 00:11:07.979 "is_configured": true, 00:11:07.979 "data_offset": 0, 00:11:07.979 "data_size": 65536 00:11:07.979 }, 00:11:07.979 { 00:11:07.979 "name": "BaseBdev2", 00:11:07.979 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:07.979 "is_configured": true, 00:11:07.979 "data_offset": 0, 00:11:07.979 "data_size": 65536 00:11:07.979 } 00:11:07.979 ] 00:11:07.979 }' 00:11:07.979 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.979 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:07.979 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.979 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:07.979 15:27:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:08.913 [2024-11-26 15:27:07.145625] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:08.913 [2024-11-26 15:27:07.145735] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:08.913 [2024-11-26 15:27:07.145795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:08.913 15:27:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.913 15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.181 "name": "raid_bdev1", 00:11:09.181 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:09.181 "strip_size_kb": 0, 00:11:09.181 "state": "online", 00:11:09.181 "raid_level": "raid1", 00:11:09.181 "superblock": false, 00:11:09.181 "num_base_bdevs": 2, 00:11:09.181 "num_base_bdevs_discovered": 2, 00:11:09.181 "num_base_bdevs_operational": 2, 00:11:09.181 "base_bdevs_list": [ 00:11:09.181 { 00:11:09.181 "name": "spare", 00:11:09.181 "uuid": "14cd5896-4387-59d6-8720-da692bbf7245", 00:11:09.181 "is_configured": true, 00:11:09.181 "data_offset": 0, 00:11:09.181 "data_size": 65536 00:11:09.181 }, 00:11:09.181 { 00:11:09.181 "name": "BaseBdev2", 00:11:09.181 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:09.181 "is_configured": true, 00:11:09.181 "data_offset": 0, 00:11:09.181 "data_size": 65536 00:11:09.181 } 00:11:09.181 ] 00:11:09.181 }' 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.181 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.181 "name": "raid_bdev1", 00:11:09.181 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:09.181 "strip_size_kb": 0, 00:11:09.181 "state": "online", 00:11:09.181 "raid_level": "raid1", 00:11:09.181 "superblock": false, 00:11:09.181 "num_base_bdevs": 2, 00:11:09.181 "num_base_bdevs_discovered": 2, 00:11:09.181 "num_base_bdevs_operational": 2, 00:11:09.181 "base_bdevs_list": [ 00:11:09.181 { 00:11:09.181 "name": "spare", 00:11:09.181 "uuid": "14cd5896-4387-59d6-8720-da692bbf7245", 00:11:09.181 "is_configured": true, 00:11:09.181 "data_offset": 0, 00:11:09.181 "data_size": 65536 00:11:09.181 }, 00:11:09.181 { 00:11:09.181 "name": "BaseBdev2", 00:11:09.181 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:09.181 "is_configured": true, 00:11:09.181 "data_offset": 0, 00:11:09.182 "data_size": 65536 
00:11:09.182 } 00:11:09.182 ] 00:11:09.182 }' 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.182 
15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.182 "name": "raid_bdev1", 00:11:09.182 "uuid": "43c01b77-2215-4ae5-ba37-38234f92b898", 00:11:09.182 "strip_size_kb": 0, 00:11:09.182 "state": "online", 00:11:09.182 "raid_level": "raid1", 00:11:09.182 "superblock": false, 00:11:09.182 "num_base_bdevs": 2, 00:11:09.182 "num_base_bdevs_discovered": 2, 00:11:09.182 "num_base_bdevs_operational": 2, 00:11:09.182 "base_bdevs_list": [ 00:11:09.182 { 00:11:09.182 "name": "spare", 00:11:09.182 "uuid": "14cd5896-4387-59d6-8720-da692bbf7245", 00:11:09.182 "is_configured": true, 00:11:09.182 "data_offset": 0, 00:11:09.182 "data_size": 65536 00:11:09.182 }, 00:11:09.182 { 00:11:09.182 "name": "BaseBdev2", 00:11:09.182 "uuid": "515bebd3-c4ec-5e22-bc7e-e01393751177", 00:11:09.182 "is_configured": true, 00:11:09.182 "data_offset": 0, 00:11:09.182 "data_size": 65536 00:11:09.182 } 00:11:09.182 ] 00:11:09.182 }' 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.182 15:27:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.750 [2024-11-26 15:27:08.031124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.750 [2024-11-26 15:27:08.031175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.750 [2024-11-26 15:27:08.031315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.750 [2024-11-26 15:27:08.031411] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.750 [2024-11-26 15:27:08.031427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:09.750 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:10.009 /dev/nbd0 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.009 1+0 records in 00:11:10.009 1+0 records out 00:11:10.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432034 s, 9.5 MB/s 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:10.009 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.010 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:10.010 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:10.010 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:10.010 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:10.010 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:10.269 /dev/nbd1 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.269 1+0 records in 00:11:10.269 1+0 records out 00:11:10.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391717 s, 10.5 MB/s 00:11:10.269 15:27:08 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.269 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:10.529 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:10.529 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:10.529 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:10.529 
15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.529 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.529 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:10.529 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:10.529 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.529 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.529 15:27:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87557 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 87557 ']' 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 87557 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87557 00:11:10.789 killing process with pid 87557 00:11:10.789 Received shutdown signal, test time was about 60.000000 seconds 00:11:10.789 00:11:10.789 Latency(us) 00:11:10.789 [2024-11-26T15:27:09.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.789 [2024-11-26T15:27:09.268Z] =================================================================================================================== 00:11:10.789 [2024-11-26T15:27:09.268Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87557' 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 87557 00:11:10.789 [2024-11-26 15:27:09.139476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.789 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 87557 00:11:10.789 [2024-11-26 15:27:09.197492] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.048 15:27:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:11.048 00:11:11.048 real 0m13.652s 00:11:11.048 user 0m15.680s 00:11:11.048 sys 0m2.736s 00:11:11.048 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.048 15:27:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.048 ************************************ 00:11:11.048 END TEST raid_rebuild_test 
00:11:11.048 ************************************ 00:11:11.308 15:27:09 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:11.308 15:27:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:11.308 15:27:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.308 15:27:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.308 ************************************ 00:11:11.308 START TEST raid_rebuild_test_sb 00:11:11.308 ************************************ 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=87963 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 87963 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 87963 ']' 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.308 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.308 15:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.308 [2024-11-26 15:27:09.688869] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:11:11.308 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:11.308 Zero copy mechanism will not be used. 00:11:11.308 [2024-11-26 15:27:09.689356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87963 ] 00:11:11.584 [2024-11-26 15:27:09.824553] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:11.584 [2024-11-26 15:27:09.862171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.584 [2024-11-26 15:27:09.906174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.584 [2024-11-26 15:27:09.984043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.584 [2024-11-26 15:27:09.984092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.164 BaseBdev1_malloc 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.164 [2024-11-26 15:27:10.541253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:12.164 [2024-11-26 15:27:10.541341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.164 [2024-11-26 15:27:10.541375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:12.164 [2024-11-26 
15:27:10.541400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.164 [2024-11-26 15:27:10.543847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.164 [2024-11-26 15:27:10.543887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.164 BaseBdev1 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.164 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.164 BaseBdev2_malloc 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 [2024-11-26 15:27:10.572133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:12.165 [2024-11-26 15:27:10.572228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.165 [2024-11-26 15:27:10.572253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:12.165 [2024-11-26 15:27:10.572266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.165 [2024-11-26 15:27:10.574696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:12.165 [2024-11-26 15:27:10.574741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.165 BaseBdev2 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 spare_malloc 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 spare_delay 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 [2024-11-26 15:27:10.627352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:12.165 [2024-11-26 15:27:10.627443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.165 [2024-11-26 15:27:10.627472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:12.165 [2024-11-26 15:27:10.627487] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.165 [2024-11-26 15:27:10.630141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.165 [2024-11-26 15:27:10.630191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:12.165 spare 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.165 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.423 [2024-11-26 15:27:10.639398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.423 [2024-11-26 15:27:10.641571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.423 [2024-11-26 15:27:10.641739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:12.423 [2024-11-26 15:27:10.641760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.423 [2024-11-26 15:27:10.642069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:12.423 [2024-11-26 15:27:10.642261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:12.423 [2024-11-26 15:27:10.642278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:12.423 [2024-11-26 15:27:10.642436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.423 "name": "raid_bdev1", 00:11:12.423 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:12.423 "strip_size_kb": 0, 00:11:12.423 "state": "online", 00:11:12.423 "raid_level": "raid1", 00:11:12.423 "superblock": true, 00:11:12.423 "num_base_bdevs": 2, 00:11:12.423 
"num_base_bdevs_discovered": 2, 00:11:12.423 "num_base_bdevs_operational": 2, 00:11:12.423 "base_bdevs_list": [ 00:11:12.423 { 00:11:12.423 "name": "BaseBdev1", 00:11:12.423 "uuid": "9b1e961e-80d1-5276-8d50-fd6abaa9abd1", 00:11:12.423 "is_configured": true, 00:11:12.423 "data_offset": 2048, 00:11:12.423 "data_size": 63488 00:11:12.423 }, 00:11:12.423 { 00:11:12.423 "name": "BaseBdev2", 00:11:12.423 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:12.423 "is_configured": true, 00:11:12.423 "data_offset": 2048, 00:11:12.423 "data_size": 63488 00:11:12.423 } 00:11:12.423 ] 00:11:12.423 }' 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.423 15:27:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.683 [2024-11-26 15:27:11.099876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:11:12.683 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.942 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:12.942 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:12.943 [2024-11-26 15:27:11.375719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:12.943 /dev/nbd0 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.943 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.203 1+0 records in 00:11:13.203 1+0 records out 00:11:13.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483213 s, 8.5 MB/s 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:13.203 15:27:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:13.203 15:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:17.400 63488+0 records in 00:11:17.400 63488+0 records out 00:11:17.400 32505856 bytes (33 MB, 31 MiB) copied, 3.95923 s, 8.2 MB/s 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:17.400 [2024-11-26 15:27:15.614424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:17.400 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.401 [2024-11-26 15:27:15.630646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.401 15:27:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.401 "name": "raid_bdev1", 00:11:17.401 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:17.401 "strip_size_kb": 0, 00:11:17.401 "state": "online", 00:11:17.401 "raid_level": "raid1", 00:11:17.401 "superblock": true, 00:11:17.401 "num_base_bdevs": 2, 00:11:17.401 "num_base_bdevs_discovered": 1, 00:11:17.401 "num_base_bdevs_operational": 1, 00:11:17.401 "base_bdevs_list": [ 00:11:17.401 { 00:11:17.401 "name": null, 00:11:17.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.401 "is_configured": false, 00:11:17.401 "data_offset": 0, 00:11:17.401 "data_size": 63488 00:11:17.401 }, 00:11:17.401 { 00:11:17.401 "name": "BaseBdev2", 00:11:17.401 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:17.401 "is_configured": true, 00:11:17.401 "data_offset": 2048, 00:11:17.401 "data_size": 63488 00:11:17.401 } 00:11:17.401 ] 00:11:17.401 }' 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.401 15:27:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.660 15:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:17.660 15:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.660 15:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.660 [2024-11-26 15:27:16.090833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:11:17.660 [2024-11-26 15:27:16.106242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3770 00:11:17.660 15:27:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.660 15:27:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:17.660 [2024-11-26 15:27:16.108982] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.038 "name": "raid_bdev1", 00:11:19.038 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:19.038 "strip_size_kb": 0, 00:11:19.038 "state": "online", 00:11:19.038 "raid_level": "raid1", 00:11:19.038 "superblock": true, 00:11:19.038 "num_base_bdevs": 2, 00:11:19.038 
"num_base_bdevs_discovered": 2, 00:11:19.038 "num_base_bdevs_operational": 2, 00:11:19.038 "process": { 00:11:19.038 "type": "rebuild", 00:11:19.038 "target": "spare", 00:11:19.038 "progress": { 00:11:19.038 "blocks": 20480, 00:11:19.038 "percent": 32 00:11:19.038 } 00:11:19.038 }, 00:11:19.038 "base_bdevs_list": [ 00:11:19.038 { 00:11:19.038 "name": "spare", 00:11:19.038 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:19.038 "is_configured": true, 00:11:19.038 "data_offset": 2048, 00:11:19.038 "data_size": 63488 00:11:19.038 }, 00:11:19.038 { 00:11:19.038 "name": "BaseBdev2", 00:11:19.038 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:19.038 "is_configured": true, 00:11:19.038 "data_offset": 2048, 00:11:19.038 "data_size": 63488 00:11:19.038 } 00:11:19.038 ] 00:11:19.038 }' 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.038 [2024-11-26 15:27:17.274353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:19.038 [2024-11-26 15:27:17.319919] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:19.038 [2024-11-26 15:27:17.320096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.038 [2024-11-26 15:27:17.320116] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:19.038 [2024-11-26 15:27:17.320138] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.038 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.039 15:27:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.039 "name": "raid_bdev1", 00:11:19.039 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:19.039 "strip_size_kb": 0, 00:11:19.039 "state": "online", 00:11:19.039 "raid_level": "raid1", 00:11:19.039 "superblock": true, 00:11:19.039 "num_base_bdevs": 2, 00:11:19.039 "num_base_bdevs_discovered": 1, 00:11:19.039 "num_base_bdevs_operational": 1, 00:11:19.039 "base_bdevs_list": [ 00:11:19.039 { 00:11:19.039 "name": null, 00:11:19.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.039 "is_configured": false, 00:11:19.039 "data_offset": 0, 00:11:19.039 "data_size": 63488 00:11:19.039 }, 00:11:19.039 { 00:11:19.039 "name": "BaseBdev2", 00:11:19.039 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:19.039 "is_configured": true, 00:11:19.039 "data_offset": 2048, 00:11:19.039 "data_size": 63488 00:11:19.039 } 00:11:19.039 ] 00:11:19.039 }' 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.039 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.608 "name": "raid_bdev1", 00:11:19.608 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:19.608 "strip_size_kb": 0, 00:11:19.608 "state": "online", 00:11:19.608 "raid_level": "raid1", 00:11:19.608 "superblock": true, 00:11:19.608 "num_base_bdevs": 2, 00:11:19.608 "num_base_bdevs_discovered": 1, 00:11:19.608 "num_base_bdevs_operational": 1, 00:11:19.608 "base_bdevs_list": [ 00:11:19.608 { 00:11:19.608 "name": null, 00:11:19.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.608 "is_configured": false, 00:11:19.608 "data_offset": 0, 00:11:19.608 "data_size": 63488 00:11:19.608 }, 00:11:19.608 { 00:11:19.608 "name": "BaseBdev2", 00:11:19.608 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:19.608 "is_configured": true, 00:11:19.608 "data_offset": 2048, 00:11:19.608 "data_size": 63488 00:11:19.608 } 00:11:19.608 ] 00:11:19.608 }' 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:19.608 [2024-11-26 15:27:17.901146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:19.608 [2024-11-26 15:27:17.909977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3840 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.608 15:27:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:19.608 [2024-11-26 15:27:17.912231] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.551 "name": "raid_bdev1", 00:11:20.551 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:20.551 "strip_size_kb": 0, 00:11:20.551 "state": "online", 00:11:20.551 "raid_level": "raid1", 
00:11:20.551 "superblock": true, 00:11:20.551 "num_base_bdevs": 2, 00:11:20.551 "num_base_bdevs_discovered": 2, 00:11:20.551 "num_base_bdevs_operational": 2, 00:11:20.551 "process": { 00:11:20.551 "type": "rebuild", 00:11:20.551 "target": "spare", 00:11:20.551 "progress": { 00:11:20.551 "blocks": 20480, 00:11:20.551 "percent": 32 00:11:20.551 } 00:11:20.551 }, 00:11:20.551 "base_bdevs_list": [ 00:11:20.551 { 00:11:20.551 "name": "spare", 00:11:20.551 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:20.551 "is_configured": true, 00:11:20.551 "data_offset": 2048, 00:11:20.551 "data_size": 63488 00:11:20.551 }, 00:11:20.551 { 00:11:20.551 "name": "BaseBdev2", 00:11:20.551 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:20.551 "is_configured": true, 00:11:20.551 "data_offset": 2048, 00:11:20.551 "data_size": 63488 00:11:20.551 } 00:11:20.551 ] 00:11:20.551 }' 00:11:20.551 15:27:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.551 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.551 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:20.811 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:20.811 15:27:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=298 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.811 "name": "raid_bdev1", 00:11:20.811 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:20.811 "strip_size_kb": 0, 00:11:20.811 "state": "online", 00:11:20.811 "raid_level": "raid1", 00:11:20.811 "superblock": true, 00:11:20.811 "num_base_bdevs": 2, 00:11:20.811 "num_base_bdevs_discovered": 2, 00:11:20.811 "num_base_bdevs_operational": 2, 00:11:20.811 "process": { 00:11:20.811 "type": "rebuild", 00:11:20.811 "target": "spare", 00:11:20.811 "progress": { 00:11:20.811 "blocks": 22528, 00:11:20.811 "percent": 35 00:11:20.811 } 00:11:20.811 }, 00:11:20.811 "base_bdevs_list": [ 
00:11:20.811 { 00:11:20.811 "name": "spare", 00:11:20.811 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:20.811 "is_configured": true, 00:11:20.811 "data_offset": 2048, 00:11:20.811 "data_size": 63488 00:11:20.811 }, 00:11:20.811 { 00:11:20.811 "name": "BaseBdev2", 00:11:20.811 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:20.811 "is_configured": true, 00:11:20.811 "data_offset": 2048, 00:11:20.811 "data_size": 63488 00:11:20.811 } 00:11:20.811 ] 00:11:20.811 }' 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.811 15:27:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.750 15:27:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.009 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:22.009 "name": "raid_bdev1", 00:11:22.009 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:22.009 "strip_size_kb": 0, 00:11:22.009 "state": "online", 00:11:22.009 "raid_level": "raid1", 00:11:22.009 "superblock": true, 00:11:22.010 "num_base_bdevs": 2, 00:11:22.010 "num_base_bdevs_discovered": 2, 00:11:22.010 "num_base_bdevs_operational": 2, 00:11:22.010 "process": { 00:11:22.010 "type": "rebuild", 00:11:22.010 "target": "spare", 00:11:22.010 "progress": { 00:11:22.010 "blocks": 45056, 00:11:22.010 "percent": 70 00:11:22.010 } 00:11:22.010 }, 00:11:22.010 "base_bdevs_list": [ 00:11:22.010 { 00:11:22.010 "name": "spare", 00:11:22.010 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:22.010 "is_configured": true, 00:11:22.010 "data_offset": 2048, 00:11:22.010 "data_size": 63488 00:11:22.010 }, 00:11:22.010 { 00:11:22.010 "name": "BaseBdev2", 00:11:22.010 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:22.010 "is_configured": true, 00:11:22.010 "data_offset": 2048, 00:11:22.010 "data_size": 63488 00:11:22.010 } 00:11:22.010 ] 00:11:22.010 }' 00:11:22.010 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:22.010 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:22.010 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.010 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:22.010 15:27:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:22.579 [2024-11-26 
15:27:21.040325] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:22.579 [2024-11-26 15:27:21.040521] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:22.579 [2024-11-26 15:27:21.040701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.147 "name": "raid_bdev1", 00:11:23.147 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:23.147 "strip_size_kb": 0, 00:11:23.147 "state": "online", 00:11:23.147 "raid_level": "raid1", 00:11:23.147 "superblock": true, 00:11:23.147 "num_base_bdevs": 2, 00:11:23.147 "num_base_bdevs_discovered": 2, 00:11:23.147 
"num_base_bdevs_operational": 2, 00:11:23.147 "base_bdevs_list": [ 00:11:23.147 { 00:11:23.147 "name": "spare", 00:11:23.147 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:23.147 "is_configured": true, 00:11:23.147 "data_offset": 2048, 00:11:23.147 "data_size": 63488 00:11:23.147 }, 00:11:23.147 { 00:11:23.147 "name": "BaseBdev2", 00:11:23.147 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:23.147 "is_configured": true, 00:11:23.147 "data_offset": 2048, 00:11:23.147 "data_size": 63488 00:11:23.147 } 00:11:23.147 ] 00:11:23.147 }' 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.147 "name": "raid_bdev1", 00:11:23.147 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:23.147 "strip_size_kb": 0, 00:11:23.147 "state": "online", 00:11:23.147 "raid_level": "raid1", 00:11:23.147 "superblock": true, 00:11:23.147 "num_base_bdevs": 2, 00:11:23.147 "num_base_bdevs_discovered": 2, 00:11:23.147 "num_base_bdevs_operational": 2, 00:11:23.147 "base_bdevs_list": [ 00:11:23.147 { 00:11:23.147 "name": "spare", 00:11:23.147 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:23.147 "is_configured": true, 00:11:23.147 "data_offset": 2048, 00:11:23.147 "data_size": 63488 00:11:23.147 }, 00:11:23.147 { 00:11:23.147 "name": "BaseBdev2", 00:11:23.147 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:23.147 "is_configured": true, 00:11:23.147 "data_offset": 2048, 00:11:23.147 "data_size": 63488 00:11:23.147 } 00:11:23.147 ] 00:11:23.147 }' 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.147 15:27:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.147 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.148 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:23.148 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.148 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.148 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.148 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.407 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.407 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.407 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.407 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.407 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.407 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.407 "name": "raid_bdev1", 00:11:23.407 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:23.407 "strip_size_kb": 0, 00:11:23.407 "state": "online", 00:11:23.407 "raid_level": "raid1", 00:11:23.407 "superblock": true, 00:11:23.407 "num_base_bdevs": 2, 00:11:23.407 "num_base_bdevs_discovered": 2, 00:11:23.407 "num_base_bdevs_operational": 2, 00:11:23.407 "base_bdevs_list": [ 00:11:23.407 { 00:11:23.407 "name": "spare", 00:11:23.407 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:23.407 "is_configured": true, 00:11:23.407 "data_offset": 2048, 00:11:23.407 "data_size": 63488 00:11:23.407 }, 00:11:23.407 { 
00:11:23.407 "name": "BaseBdev2", 00:11:23.407 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:23.407 "is_configured": true, 00:11:23.407 "data_offset": 2048, 00:11:23.407 "data_size": 63488 00:11:23.407 } 00:11:23.407 ] 00:11:23.407 }' 00:11:23.407 15:27:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.407 15:27:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.667 [2024-11-26 15:27:22.057548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:23.667 [2024-11-26 15:27:22.057677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.667 [2024-11-26 15:27:22.057840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.667 [2024-11-26 15:27:22.057962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.667 [2024-11-26 15:27:22.058016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.667 15:27:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:23.667 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:23.927 /dev/nbd0 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:23.927 1+0 records in 00:11:23.927 1+0 records out 00:11:23.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275228 s, 14.9 MB/s 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:23.927 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:24.186 /dev/nbd1 00:11:24.186 15:27:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.186 1+0 records in 00:11:24.186 1+0 records out 00:11:24.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355097 s, 11.5 MB/s 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:24.186 15:27:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:24.186 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.444 15:27:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.703 [2024-11-26 15:27:23.146771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:11:24.703 [2024-11-26 15:27:23.146906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.703 [2024-11-26 15:27:23.146953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:24.703 [2024-11-26 15:27:23.146980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.703 [2024-11-26 15:27:23.149506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.703 [2024-11-26 15:27:23.149585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:24.703 [2024-11-26 15:27:23.149701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:24.703 [2024-11-26 15:27:23.149788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:24.703 [2024-11-26 15:27:23.149943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.703 spare 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.703 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.962 [2024-11-26 15:27:23.250023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:24.962 [2024-11-26 15:27:23.250116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.962 [2024-11-26 15:27:23.250499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:11:24.962 [2024-11-26 15:27:23.250718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:24.962 [2024-11-26 15:27:23.250763] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:24.962 [2024-11-26 15:27:23.250966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.962 
15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.962 "name": "raid_bdev1", 00:11:24.962 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:24.962 "strip_size_kb": 0, 00:11:24.962 "state": "online", 00:11:24.962 "raid_level": "raid1", 00:11:24.962 "superblock": true, 00:11:24.962 "num_base_bdevs": 2, 00:11:24.962 "num_base_bdevs_discovered": 2, 00:11:24.962 "num_base_bdevs_operational": 2, 00:11:24.962 "base_bdevs_list": [ 00:11:24.962 { 00:11:24.962 "name": "spare", 00:11:24.962 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:24.962 "is_configured": true, 00:11:24.962 "data_offset": 2048, 00:11:24.962 "data_size": 63488 00:11:24.962 }, 00:11:24.962 { 00:11:24.962 "name": "BaseBdev2", 00:11:24.962 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:24.962 "is_configured": true, 00:11:24.962 "data_offset": 2048, 00:11:24.962 "data_size": 63488 00:11:24.962 } 00:11:24.962 ] 00:11:24.962 }' 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.962 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.531 15:27:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.531 "name": "raid_bdev1", 00:11:25.531 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:25.531 "strip_size_kb": 0, 00:11:25.531 "state": "online", 00:11:25.531 "raid_level": "raid1", 00:11:25.531 "superblock": true, 00:11:25.531 "num_base_bdevs": 2, 00:11:25.531 "num_base_bdevs_discovered": 2, 00:11:25.531 "num_base_bdevs_operational": 2, 00:11:25.531 "base_bdevs_list": [ 00:11:25.531 { 00:11:25.531 "name": "spare", 00:11:25.531 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:25.531 "is_configured": true, 00:11:25.531 "data_offset": 2048, 00:11:25.531 "data_size": 63488 00:11:25.531 }, 00:11:25.531 { 00:11:25.531 "name": "BaseBdev2", 00:11:25.531 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:25.531 "is_configured": true, 00:11:25.531 "data_offset": 2048, 00:11:25.531 "data_size": 63488 00:11:25.531 } 00:11:25.531 ] 00:11:25.531 }' 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.531 [2024-11-26 15:27:23.871107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.531 "name": "raid_bdev1", 00:11:25.531 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:25.531 "strip_size_kb": 0, 00:11:25.531 "state": "online", 00:11:25.531 "raid_level": "raid1", 00:11:25.531 "superblock": true, 00:11:25.531 "num_base_bdevs": 2, 00:11:25.531 "num_base_bdevs_discovered": 1, 00:11:25.531 "num_base_bdevs_operational": 1, 00:11:25.531 "base_bdevs_list": [ 00:11:25.531 { 00:11:25.531 "name": null, 00:11:25.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.531 "is_configured": false, 00:11:25.531 "data_offset": 0, 00:11:25.531 "data_size": 63488 00:11:25.531 }, 00:11:25.531 { 00:11:25.531 "name": "BaseBdev2", 00:11:25.531 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:25.531 "is_configured": true, 00:11:25.531 "data_offset": 2048, 00:11:25.531 "data_size": 63488 00:11:25.531 } 00:11:25.531 ] 00:11:25.531 }' 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.531 15:27:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.102 15:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:26.102 15:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.102 15:27:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.102 [2024-11-26 15:27:24.291344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.102 [2024-11-26 15:27:24.291632] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:26.102 [2024-11-26 15:27:24.291713] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:26.102 [2024-11-26 15:27:24.291789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.102 [2024-11-26 15:27:24.300399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1fc0 00:11:26.102 15:27:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.102 15:27:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:26.102 [2024-11-26 15:27:24.302706] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.042 "name": "raid_bdev1", 00:11:27.042 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:27.042 "strip_size_kb": 0, 00:11:27.042 "state": "online", 00:11:27.042 "raid_level": "raid1", 00:11:27.042 "superblock": true, 00:11:27.042 "num_base_bdevs": 2, 00:11:27.042 "num_base_bdevs_discovered": 2, 00:11:27.042 "num_base_bdevs_operational": 2, 00:11:27.042 "process": { 00:11:27.042 "type": "rebuild", 00:11:27.042 "target": "spare", 00:11:27.042 "progress": { 00:11:27.042 "blocks": 20480, 00:11:27.042 "percent": 32 00:11:27.042 } 00:11:27.042 }, 00:11:27.042 "base_bdevs_list": [ 00:11:27.042 { 00:11:27.042 "name": "spare", 00:11:27.042 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:27.042 "is_configured": true, 00:11:27.042 "data_offset": 2048, 00:11:27.042 "data_size": 63488 00:11:27.042 }, 00:11:27.042 { 00:11:27.042 "name": "BaseBdev2", 00:11:27.042 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:27.042 "is_configured": true, 00:11:27.042 "data_offset": 2048, 00:11:27.042 "data_size": 63488 00:11:27.042 } 00:11:27.042 ] 00:11:27.042 }' 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:27.042 15:27:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.042 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.042 [2024-11-26 15:27:25.469372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:27.042 [2024-11-26 15:27:25.513301] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:27.042 [2024-11-26 15:27:25.513368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.042 [2024-11-26 15:27:25.513383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:27.042 [2024-11-26 15:27:25.513393] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.302 "name": "raid_bdev1", 00:11:27.302 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:27.302 "strip_size_kb": 0, 00:11:27.302 "state": "online", 00:11:27.302 "raid_level": "raid1", 00:11:27.302 "superblock": true, 00:11:27.302 "num_base_bdevs": 2, 00:11:27.302 "num_base_bdevs_discovered": 1, 00:11:27.302 "num_base_bdevs_operational": 1, 00:11:27.302 "base_bdevs_list": [ 00:11:27.302 { 00:11:27.302 "name": null, 00:11:27.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.302 "is_configured": false, 00:11:27.302 "data_offset": 0, 00:11:27.302 "data_size": 63488 00:11:27.302 }, 00:11:27.302 { 00:11:27.302 "name": "BaseBdev2", 00:11:27.302 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:27.302 "is_configured": true, 00:11:27.302 "data_offset": 2048, 00:11:27.302 "data_size": 63488 00:11:27.302 } 00:11:27.302 ] 00:11:27.302 }' 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.302 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.568 15:27:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:27.568 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:27.568 15:27:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.568 [2024-11-26 15:27:26.001849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:27.568 [2024-11-26 15:27:26.002038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.568 [2024-11-26 15:27:26.002083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:27.568 [2024-11-26 15:27:26.002119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.568 [2024-11-26 15:27:26.002723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.568 [2024-11-26 15:27:26.002801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:27.568 [2024-11-26 15:27:26.002945] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:27.568 [2024-11-26 15:27:26.003002] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:27.568 [2024-11-26 15:27:26.003047] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:27.568 [2024-11-26 15:27:26.003095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:27.568 [2024-11-26 15:27:26.011834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:11:27.568 spare 00:11:27.568 15:27:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.568 15:27:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:27.568 [2024-11-26 15:27:26.014223] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.957 "name": "raid_bdev1", 00:11:28.957 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:28.957 "strip_size_kb": 0, 00:11:28.957 "state": "online", 00:11:28.957 
"raid_level": "raid1", 00:11:28.957 "superblock": true, 00:11:28.957 "num_base_bdevs": 2, 00:11:28.957 "num_base_bdevs_discovered": 2, 00:11:28.957 "num_base_bdevs_operational": 2, 00:11:28.957 "process": { 00:11:28.957 "type": "rebuild", 00:11:28.957 "target": "spare", 00:11:28.957 "progress": { 00:11:28.957 "blocks": 20480, 00:11:28.957 "percent": 32 00:11:28.957 } 00:11:28.957 }, 00:11:28.957 "base_bdevs_list": [ 00:11:28.957 { 00:11:28.957 "name": "spare", 00:11:28.957 "uuid": "cbab3d1b-1006-5703-b3c9-54c88dbf9223", 00:11:28.957 "is_configured": true, 00:11:28.957 "data_offset": 2048, 00:11:28.957 "data_size": 63488 00:11:28.957 }, 00:11:28.957 { 00:11:28.957 "name": "BaseBdev2", 00:11:28.957 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:28.957 "is_configured": true, 00:11:28.957 "data_offset": 2048, 00:11:28.957 "data_size": 63488 00:11:28.957 } 00:11:28.957 ] 00:11:28.957 }' 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.957 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.958 [2024-11-26 15:27:27.176908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.958 [2024-11-26 15:27:27.224823] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:28.958 [2024-11-26 15:27:27.224973] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.958 [2024-11-26 15:27:27.225015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:28.958 [2024-11-26 15:27:27.225037] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.958 15:27:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.958 "name": "raid_bdev1", 00:11:28.958 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:28.958 "strip_size_kb": 0, 00:11:28.958 "state": "online", 00:11:28.958 "raid_level": "raid1", 00:11:28.958 "superblock": true, 00:11:28.958 "num_base_bdevs": 2, 00:11:28.958 "num_base_bdevs_discovered": 1, 00:11:28.958 "num_base_bdevs_operational": 1, 00:11:28.958 "base_bdevs_list": [ 00:11:28.958 { 00:11:28.958 "name": null, 00:11:28.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.958 "is_configured": false, 00:11:28.958 "data_offset": 0, 00:11:28.958 "data_size": 63488 00:11:28.958 }, 00:11:28.958 { 00:11:28.958 "name": "BaseBdev2", 00:11:28.958 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:28.958 "is_configured": true, 00:11:28.958 "data_offset": 2048, 00:11:28.958 "data_size": 63488 00:11:28.958 } 00:11:28.958 ] 00:11:28.958 }' 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.958 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.217 "name": "raid_bdev1", 00:11:29.217 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:29.217 "strip_size_kb": 0, 00:11:29.217 "state": "online", 00:11:29.217 "raid_level": "raid1", 00:11:29.217 "superblock": true, 00:11:29.217 "num_base_bdevs": 2, 00:11:29.217 "num_base_bdevs_discovered": 1, 00:11:29.217 "num_base_bdevs_operational": 1, 00:11:29.217 "base_bdevs_list": [ 00:11:29.217 { 00:11:29.217 "name": null, 00:11:29.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.217 "is_configured": false, 00:11:29.217 "data_offset": 0, 00:11:29.217 "data_size": 63488 00:11:29.217 }, 00:11:29.217 { 00:11:29.217 "name": "BaseBdev2", 00:11:29.217 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:29.217 "is_configured": true, 00:11:29.217 "data_offset": 2048, 00:11:29.217 "data_size": 63488 00:11:29.217 } 00:11:29.217 ] 00:11:29.217 }' 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:29.217 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.477 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:29.477 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:29.477 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:29.477 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.477 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.477 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:29.477 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.477 15:27:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.477 [2024-11-26 15:27:27.753590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:29.477 [2024-11-26 15:27:27.753668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.477 [2024-11-26 15:27:27.753699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:29.477 [2024-11-26 15:27:27.753709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.477 [2024-11-26 15:27:27.754224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.477 [2024-11-26 15:27:27.754253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:29.477 [2024-11-26 15:27:27.754350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:29.477 [2024-11-26 15:27:27.754368] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:29.477 [2024-11-26 15:27:27.754380] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:29.478 [2024-11-26 15:27:27.754393] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:29.478 BaseBdev1 00:11:29.478 15:27:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.478 15:27:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.416 "name": "raid_bdev1", 00:11:30.416 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:30.416 
"strip_size_kb": 0, 00:11:30.416 "state": "online", 00:11:30.416 "raid_level": "raid1", 00:11:30.416 "superblock": true, 00:11:30.416 "num_base_bdevs": 2, 00:11:30.416 "num_base_bdevs_discovered": 1, 00:11:30.416 "num_base_bdevs_operational": 1, 00:11:30.416 "base_bdevs_list": [ 00:11:30.416 { 00:11:30.416 "name": null, 00:11:30.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.416 "is_configured": false, 00:11:30.416 "data_offset": 0, 00:11:30.416 "data_size": 63488 00:11:30.416 }, 00:11:30.416 { 00:11:30.416 "name": "BaseBdev2", 00:11:30.416 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:30.416 "is_configured": true, 00:11:30.416 "data_offset": 2048, 00:11:30.416 "data_size": 63488 00:11:30.416 } 00:11:30.416 ] 00:11:30.416 }' 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.416 15:27:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 15:27:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.676 "name": "raid_bdev1", 00:11:30.676 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:30.676 "strip_size_kb": 0, 00:11:30.676 "state": "online", 00:11:30.676 "raid_level": "raid1", 00:11:30.676 "superblock": true, 00:11:30.676 "num_base_bdevs": 2, 00:11:30.676 "num_base_bdevs_discovered": 1, 00:11:30.676 "num_base_bdevs_operational": 1, 00:11:30.676 "base_bdevs_list": [ 00:11:30.676 { 00:11:30.676 "name": null, 00:11:30.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.676 "is_configured": false, 00:11:30.676 "data_offset": 0, 00:11:30.676 "data_size": 63488 00:11:30.676 }, 00:11:30.676 { 00:11:30.676 "name": "BaseBdev2", 00:11:30.676 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:30.676 "is_configured": true, 00:11:30.676 "data_offset": 2048, 00:11:30.676 "data_size": 63488 00:11:30.676 } 00:11:30.676 ] 00:11:30.676 }' 00:11:30.676 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.934 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.935 [2024-11-26 15:27:29.234064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.935 [2024-11-26 15:27:29.234377] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:30.935 [2024-11-26 15:27:29.234441] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:30.935 request: 00:11:30.935 { 00:11:30.935 "base_bdev": "BaseBdev1", 00:11:30.935 "raid_bdev": "raid_bdev1", 00:11:30.935 "method": "bdev_raid_add_base_bdev", 00:11:30.935 "req_id": 1 00:11:30.935 } 00:11:30.935 Got JSON-RPC error response 00:11:30.935 response: 00:11:30.935 { 00:11:30.935 "code": -22, 00:11:30.935 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:30.935 } 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:30.935 15:27:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:30.935 15:27:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.872 "name": "raid_bdev1", 00:11:31.872 "uuid": 
"69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:31.872 "strip_size_kb": 0, 00:11:31.872 "state": "online", 00:11:31.872 "raid_level": "raid1", 00:11:31.872 "superblock": true, 00:11:31.872 "num_base_bdevs": 2, 00:11:31.872 "num_base_bdevs_discovered": 1, 00:11:31.872 "num_base_bdevs_operational": 1, 00:11:31.872 "base_bdevs_list": [ 00:11:31.872 { 00:11:31.872 "name": null, 00:11:31.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.872 "is_configured": false, 00:11:31.872 "data_offset": 0, 00:11:31.872 "data_size": 63488 00:11:31.872 }, 00:11:31.872 { 00:11:31.872 "name": "BaseBdev2", 00:11:31.872 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:31.872 "is_configured": true, 00:11:31.872 "data_offset": 2048, 00:11:31.872 "data_size": 63488 00:11:31.872 } 00:11:31.872 ] 00:11:31.872 }' 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.872 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.441 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:32.441 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.441 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:32.441 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.442 "name": "raid_bdev1", 00:11:32.442 "uuid": "69692be7-11bf-42d2-a6cd-c5689066ad66", 00:11:32.442 "strip_size_kb": 0, 00:11:32.442 "state": "online", 00:11:32.442 "raid_level": "raid1", 00:11:32.442 "superblock": true, 00:11:32.442 "num_base_bdevs": 2, 00:11:32.442 "num_base_bdevs_discovered": 1, 00:11:32.442 "num_base_bdevs_operational": 1, 00:11:32.442 "base_bdevs_list": [ 00:11:32.442 { 00:11:32.442 "name": null, 00:11:32.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.442 "is_configured": false, 00:11:32.442 "data_offset": 0, 00:11:32.442 "data_size": 63488 00:11:32.442 }, 00:11:32.442 { 00:11:32.442 "name": "BaseBdev2", 00:11:32.442 "uuid": "8b369675-cf95-5830-9936-3109754ccd4c", 00:11:32.442 "is_configured": true, 00:11:32.442 "data_offset": 2048, 00:11:32.442 "data_size": 63488 00:11:32.442 } 00:11:32.442 ] 00:11:32.442 }' 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 87963 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 87963 ']' 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 87963 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87963 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.442 killing process with pid 87963 00:11:32.442 Received shutdown signal, test time was about 60.000000 seconds 00:11:32.442 00:11:32.442 Latency(us) 00:11:32.442 [2024-11-26T15:27:30.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:32.442 [2024-11-26T15:27:30.921Z] =================================================================================================================== 00:11:32.442 [2024-11-26T15:27:30.921Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87963' 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 87963 00:11:32.442 [2024-11-26 15:27:30.830878] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.442 15:27:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 87963 00:11:32.442 [2024-11-26 15:27:30.831043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.442 [2024-11-26 15:27:30.831104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.442 [2024-11-26 15:27:30.831116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:32.442 [2024-11-26 15:27:30.890457] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.011 ************************************ 00:11:33.011 END TEST raid_rebuild_test_sb 
00:11:33.011 ************************************ 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:33.011 00:11:33.011 real 0m21.614s 00:11:33.011 user 0m26.237s 00:11:33.011 sys 0m3.726s 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.011 15:27:31 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:33.011 15:27:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:33.011 15:27:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.011 15:27:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.011 ************************************ 00:11:33.011 START TEST raid_rebuild_test_io 00:11:33.011 ************************************ 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88678 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88678 00:11:33.011 15:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
88678 ']' 00:11:33.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.012 15:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.012 15:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.012 15:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.012 15:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.012 15:27:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.012 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:33.012 Zero copy mechanism will not be used. 00:11:33.012 [2024-11-26 15:27:31.387722] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:11:33.012 [2024-11-26 15:27:31.387858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88678 ] 00:11:33.272 [2024-11-26 15:27:31.528233] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:33.272 [2024-11-26 15:27:31.567222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.272 [2024-11-26 15:27:31.608203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.272 [2024-11-26 15:27:31.684944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.272 [2024-11-26 15:27:31.684989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.842 BaseBdev1_malloc 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.842 [2024-11-26 15:27:32.241202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:33.842 [2024-11-26 15:27:32.241379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.842 [2024-11-26 15:27:32.241435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:33.842 [2024-11-26 
15:27:32.241474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.842 [2024-11-26 15:27:32.244031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.842 [2024-11-26 15:27:32.244109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:33.842 BaseBdev1 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.842 BaseBdev2_malloc 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.842 [2024-11-26 15:27:32.275945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:33.842 [2024-11-26 15:27:32.276086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.842 [2024-11-26 15:27:32.276124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:33.842 [2024-11-26 15:27:32.276154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.842 [2024-11-26 15:27:32.278594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:33.842 [2024-11-26 15:27:32.278676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:33.842 BaseBdev2 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.842 spare_malloc 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.842 spare_delay 00:11:33.842 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.102 [2024-11-26 15:27:32.322506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:34.102 [2024-11-26 15:27:32.322623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.102 [2024-11-26 15:27:32.322647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:34.102 [2024-11-26 15:27:32.322662] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.102 [2024-11-26 15:27:32.325103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.102 [2024-11-26 15:27:32.325142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:34.102 spare 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.102 [2024-11-26 15:27:32.334628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.102 [2024-11-26 15:27:32.336948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.102 [2024-11-26 15:27:32.337112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:34.102 [2024-11-26 15:27:32.337157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:34.102 [2024-11-26 15:27:32.337548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:34.102 [2024-11-26 15:27:32.337757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:34.102 [2024-11-26 15:27:32.337804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:34.102 [2024-11-26 15:27:32.338025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.102 "name": "raid_bdev1", 00:11:34.102 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:34.102 "strip_size_kb": 0, 00:11:34.102 "state": "online", 00:11:34.102 "raid_level": "raid1", 00:11:34.102 "superblock": false, 00:11:34.102 "num_base_bdevs": 2, 00:11:34.102 
"num_base_bdevs_discovered": 2, 00:11:34.102 "num_base_bdevs_operational": 2, 00:11:34.102 "base_bdevs_list": [ 00:11:34.102 { 00:11:34.102 "name": "BaseBdev1", 00:11:34.102 "uuid": "415d3266-0bc0-5148-8e20-dd59f10be754", 00:11:34.102 "is_configured": true, 00:11:34.102 "data_offset": 0, 00:11:34.102 "data_size": 65536 00:11:34.102 }, 00:11:34.102 { 00:11:34.102 "name": "BaseBdev2", 00:11:34.102 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:34.102 "is_configured": true, 00:11:34.102 "data_offset": 0, 00:11:34.102 "data_size": 65536 00:11:34.102 } 00:11:34.102 ] 00:11:34.102 }' 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.102 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:34.362 [2024-11-26 15:27:32.723120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 [2024-11-26 15:27:32.826779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:34.362 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.622 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.622 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.622 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.622 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.622 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.622 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.622 "name": "raid_bdev1", 00:11:34.622 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:34.622 "strip_size_kb": 0, 00:11:34.622 "state": "online", 00:11:34.622 "raid_level": "raid1", 00:11:34.622 "superblock": false, 00:11:34.623 "num_base_bdevs": 2, 00:11:34.623 "num_base_bdevs_discovered": 1, 00:11:34.623 "num_base_bdevs_operational": 1, 00:11:34.623 "base_bdevs_list": [ 00:11:34.623 { 00:11:34.623 "name": null, 00:11:34.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.623 "is_configured": false, 00:11:34.623 "data_offset": 0, 00:11:34.623 "data_size": 65536 00:11:34.623 }, 00:11:34.623 { 00:11:34.623 "name": "BaseBdev2", 00:11:34.623 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:34.623 "is_configured": true, 00:11:34.623 "data_offset": 0, 00:11:34.623 "data_size": 65536 00:11:34.623 } 00:11:34.623 ] 00:11:34.623 }' 00:11:34.623 15:27:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.623 15:27:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.623 [2024-11-26 15:27:32.922400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:11:34.623 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:11:34.623 Zero copy mechanism will not be used. 00:11:34.623 Running I/O for 60 seconds... 00:11:34.882 15:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:34.882 15:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.882 15:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.882 [2024-11-26 15:27:33.269712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:34.882 15:27:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.882 15:27:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:35.141 [2024-11-26 15:27:33.356811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:35.141 [2024-11-26 15:27:33.359427] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:35.141 [2024-11-26 15:27:33.480198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:35.141 [2024-11-26 15:27:33.612141] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:35.141 [2024-11-26 15:27:33.612577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:35.721 183.00 IOPS, 549.00 MiB/s [2024-11-26T15:27:34.200Z] [2024-11-26 15:27:33.936725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:35.721 [2024-11-26 15:27:34.053865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:35.721 [2024-11-26 15:27:34.054248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:35.980 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:35.980 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.980 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:35.980 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:35.980 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.980 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.980 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.980 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.980 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.981 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.981 [2024-11-26 15:27:34.375041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:35.981 [2024-11-26 15:27:34.375866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:35.981 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:35.981 "name": "raid_bdev1", 00:11:35.981 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:35.981 "strip_size_kb": 0, 00:11:35.981 "state": "online", 00:11:35.981 "raid_level": "raid1", 00:11:35.981 "superblock": false, 00:11:35.981 "num_base_bdevs": 2, 00:11:35.981 "num_base_bdevs_discovered": 2, 00:11:35.981 "num_base_bdevs_operational": 2, 00:11:35.981 "process": { 00:11:35.981 
"type": "rebuild", 00:11:35.981 "target": "spare", 00:11:35.981 "progress": { 00:11:35.981 "blocks": 12288, 00:11:35.981 "percent": 18 00:11:35.981 } 00:11:35.981 }, 00:11:35.981 "base_bdevs_list": [ 00:11:35.981 { 00:11:35.981 "name": "spare", 00:11:35.981 "uuid": "828d9336-e90c-555e-87f5-952caae0f68b", 00:11:35.981 "is_configured": true, 00:11:35.981 "data_offset": 0, 00:11:35.981 "data_size": 65536 00:11:35.981 }, 00:11:35.981 { 00:11:35.981 "name": "BaseBdev2", 00:11:35.981 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:35.981 "is_configured": true, 00:11:35.981 "data_offset": 0, 00:11:35.981 "data_size": 65536 00:11:35.981 } 00:11:35.981 ] 00:11:35.981 }' 00:11:35.981 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:35.981 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:35.981 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:35.981 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:35.981 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:35.981 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.981 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.241 [2024-11-26 15:27:34.457868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:36.241 [2024-11-26 15:27:34.492825] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:36.241 [2024-11-26 15:27:34.495105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.241 [2024-11-26 15:27:34.495199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:36.241 [2024-11-26 15:27:34.495236] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:36.241 [2024-11-26 15:27:34.526448] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.241 "name": "raid_bdev1", 00:11:36.241 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:36.241 "strip_size_kb": 0, 00:11:36.241 "state": "online", 00:11:36.241 "raid_level": "raid1", 00:11:36.241 "superblock": false, 00:11:36.241 "num_base_bdevs": 2, 00:11:36.241 "num_base_bdevs_discovered": 1, 00:11:36.241 "num_base_bdevs_operational": 1, 00:11:36.241 "base_bdevs_list": [ 00:11:36.241 { 00:11:36.241 "name": null, 00:11:36.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.241 "is_configured": false, 00:11:36.241 "data_offset": 0, 00:11:36.241 "data_size": 65536 00:11:36.241 }, 00:11:36.241 { 00:11:36.241 "name": "BaseBdev2", 00:11:36.241 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:36.241 "is_configured": true, 00:11:36.241 "data_offset": 0, 00:11:36.241 "data_size": 65536 00:11:36.241 } 00:11:36.241 ] 00:11:36.241 }' 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.241 15:27:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.760 170.50 IOPS, 511.50 MiB/s [2024-11-26T15:27:35.239Z] 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.760 "name": "raid_bdev1", 00:11:36.760 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:36.760 "strip_size_kb": 0, 00:11:36.760 "state": "online", 00:11:36.760 "raid_level": "raid1", 00:11:36.760 "superblock": false, 00:11:36.760 "num_base_bdevs": 2, 00:11:36.760 "num_base_bdevs_discovered": 1, 00:11:36.760 "num_base_bdevs_operational": 1, 00:11:36.760 "base_bdevs_list": [ 00:11:36.760 { 00:11:36.760 "name": null, 00:11:36.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.760 "is_configured": false, 00:11:36.760 "data_offset": 0, 00:11:36.760 "data_size": 65536 00:11:36.760 }, 00:11:36.760 { 00:11:36.760 "name": "BaseBdev2", 00:11:36.760 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:36.760 "is_configured": true, 00:11:36.760 "data_offset": 0, 00:11:36.760 "data_size": 65536 00:11:36.760 } 00:11:36.760 ] 00:11:36.760 }' 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.760 15:27:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.760 [2024-11-26 15:27:35.190549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.760 15:27:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:37.019 [2024-11-26 15:27:35.241842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:11:37.019 [2024-11-26 15:27:35.244103] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:37.019 [2024-11-26 15:27:35.358710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:37.019 [2024-11-26 15:27:35.478284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:37.019 [2024-11-26 15:27:35.478715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:37.586 [2024-11-26 15:27:35.792594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:37.586 [2024-11-26 15:27:35.793459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:37.586 166.00 IOPS, 498.00 MiB/s [2024-11-26T15:27:36.065Z] [2024-11-26 15:27:36.010254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:37.586 [2024-11-26 15:27:36.010819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.845 15:27:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.845 "name": "raid_bdev1", 00:11:37.845 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:37.845 "strip_size_kb": 0, 00:11:37.845 "state": "online", 00:11:37.845 "raid_level": "raid1", 00:11:37.845 "superblock": false, 00:11:37.845 "num_base_bdevs": 2, 00:11:37.845 "num_base_bdevs_discovered": 2, 00:11:37.845 "num_base_bdevs_operational": 2, 00:11:37.845 "process": { 00:11:37.845 "type": "rebuild", 00:11:37.845 "target": "spare", 00:11:37.845 "progress": { 00:11:37.845 "blocks": 10240, 00:11:37.845 "percent": 15 00:11:37.845 } 00:11:37.845 }, 00:11:37.845 "base_bdevs_list": [ 00:11:37.845 { 00:11:37.845 "name": "spare", 00:11:37.845 "uuid": "828d9336-e90c-555e-87f5-952caae0f68b", 00:11:37.845 "is_configured": true, 00:11:37.845 "data_offset": 0, 00:11:37.845 "data_size": 65536 00:11:37.845 }, 00:11:37.845 { 00:11:37.845 "name": "BaseBdev2", 00:11:37.845 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:37.845 
"is_configured": true, 00:11:37.845 "data_offset": 0, 00:11:37.845 "data_size": 65536 00:11:37.845 } 00:11:37.845 ] 00:11:37.845 }' 00:11:37.845 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=315 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:38.105 [2024-11-26 15:27:36.380735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.105 15:27:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.105 "name": "raid_bdev1", 00:11:38.105 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:38.105 "strip_size_kb": 0, 00:11:38.105 "state": "online", 00:11:38.105 "raid_level": "raid1", 00:11:38.105 "superblock": false, 00:11:38.105 "num_base_bdevs": 2, 00:11:38.105 "num_base_bdevs_discovered": 2, 00:11:38.105 "num_base_bdevs_operational": 2, 00:11:38.105 "process": { 00:11:38.105 "type": "rebuild", 00:11:38.105 "target": "spare", 00:11:38.105 "progress": { 00:11:38.105 "blocks": 14336, 00:11:38.105 "percent": 21 00:11:38.105 } 00:11:38.105 }, 00:11:38.105 "base_bdevs_list": [ 00:11:38.105 { 00:11:38.105 "name": "spare", 00:11:38.105 "uuid": "828d9336-e90c-555e-87f5-952caae0f68b", 00:11:38.105 "is_configured": true, 00:11:38.105 "data_offset": 0, 00:11:38.105 "data_size": 65536 00:11:38.105 }, 00:11:38.105 { 00:11:38.105 "name": "BaseBdev2", 00:11:38.105 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:38.105 "is_configured": true, 00:11:38.105 "data_offset": 0, 00:11:38.105 "data_size": 65536 00:11:38.105 } 00:11:38.105 ] 00:11:38.105 }' 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:38.105 15:27:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:38.364 [2024-11-26 15:27:36.589877] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:38.364 [2024-11-26 15:27:36.590407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:39.192 138.50 IOPS, 415.50 MiB/s [2024-11-26T15:27:37.671Z] 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:39.192 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.192 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.192 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.192 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.192 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.192 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.192 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.192 15:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.192 15:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.193 15:27:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.193 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.193 "name": "raid_bdev1", 
00:11:39.193 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:39.193 "strip_size_kb": 0, 00:11:39.193 "state": "online", 00:11:39.193 "raid_level": "raid1", 00:11:39.193 "superblock": false, 00:11:39.193 "num_base_bdevs": 2, 00:11:39.193 "num_base_bdevs_discovered": 2, 00:11:39.193 "num_base_bdevs_operational": 2, 00:11:39.193 "process": { 00:11:39.193 "type": "rebuild", 00:11:39.193 "target": "spare", 00:11:39.193 "progress": { 00:11:39.193 "blocks": 30720, 00:11:39.193 "percent": 46 00:11:39.193 } 00:11:39.193 }, 00:11:39.193 "base_bdevs_list": [ 00:11:39.193 { 00:11:39.193 "name": "spare", 00:11:39.193 "uuid": "828d9336-e90c-555e-87f5-952caae0f68b", 00:11:39.193 "is_configured": true, 00:11:39.193 "data_offset": 0, 00:11:39.193 "data_size": 65536 00:11:39.193 }, 00:11:39.193 { 00:11:39.193 "name": "BaseBdev2", 00:11:39.193 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:39.193 "is_configured": true, 00:11:39.193 "data_offset": 0, 00:11:39.193 "data_size": 65536 00:11:39.193 } 00:11:39.193 ] 00:11:39.193 }' 00:11:39.193 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.193 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.193 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.452 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.452 15:27:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:39.452 [2024-11-26 15:27:37.748357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:39.971 121.80 IOPS, 365.40 MiB/s [2024-11-26T15:27:38.450Z] [2024-11-26 15:27:38.315236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:40.231 15:27:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.231 15:27:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.490 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.490 "name": "raid_bdev1", 00:11:40.490 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:40.490 "strip_size_kb": 0, 00:11:40.490 "state": "online", 00:11:40.490 "raid_level": "raid1", 00:11:40.490 "superblock": false, 00:11:40.490 "num_base_bdevs": 2, 00:11:40.490 "num_base_bdevs_discovered": 2, 00:11:40.490 "num_base_bdevs_operational": 2, 00:11:40.490 "process": { 00:11:40.490 "type": "rebuild", 00:11:40.490 "target": "spare", 00:11:40.490 "progress": { 00:11:40.490 "blocks": 49152, 00:11:40.490 "percent": 75 00:11:40.490 } 00:11:40.490 }, 00:11:40.490 "base_bdevs_list": [ 00:11:40.490 { 00:11:40.490 "name": "spare", 00:11:40.490 "uuid": "828d9336-e90c-555e-87f5-952caae0f68b", 
00:11:40.490 "is_configured": true, 00:11:40.490 "data_offset": 0, 00:11:40.490 "data_size": 65536 00:11:40.490 }, 00:11:40.490 { 00:11:40.490 "name": "BaseBdev2", 00:11:40.490 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:40.490 "is_configured": true, 00:11:40.490 "data_offset": 0, 00:11:40.490 "data_size": 65536 00:11:40.490 } 00:11:40.490 ] 00:11:40.490 }' 00:11:40.490 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.490 [2024-11-26 15:27:38.768536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:40.490 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.490 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.490 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.490 15:27:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:41.429 108.00 IOPS, 324.00 MiB/s [2024-11-26T15:27:39.908Z] [2024-11-26 15:27:39.538159] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:41.429 [2024-11-26 15:27:39.644199] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:41.429 [2024-11-26 15:27:39.646841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.429 
15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.429 "name": "raid_bdev1", 00:11:41.429 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:41.429 "strip_size_kb": 0, 00:11:41.429 "state": "online", 00:11:41.429 "raid_level": "raid1", 00:11:41.429 "superblock": false, 00:11:41.429 "num_base_bdevs": 2, 00:11:41.429 "num_base_bdevs_discovered": 2, 00:11:41.429 "num_base_bdevs_operational": 2, 00:11:41.429 "base_bdevs_list": [ 00:11:41.429 { 00:11:41.429 "name": "spare", 00:11:41.429 "uuid": "828d9336-e90c-555e-87f5-952caae0f68b", 00:11:41.429 "is_configured": true, 00:11:41.429 "data_offset": 0, 00:11:41.429 "data_size": 65536 00:11:41.429 }, 00:11:41.429 { 00:11:41.429 "name": "BaseBdev2", 00:11:41.429 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:41.429 "is_configured": true, 00:11:41.429 "data_offset": 0, 00:11:41.429 "data_size": 65536 00:11:41.429 } 00:11:41.429 ] 00:11:41.429 }' 00:11:41.429 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.689 96.43 IOPS, 289.29 MiB/s [2024-11-26T15:27:40.168Z] 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.689 15:27:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.689 "name": "raid_bdev1", 00:11:41.689 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:41.689 "strip_size_kb": 0, 00:11:41.689 "state": "online", 00:11:41.689 "raid_level": "raid1", 00:11:41.689 "superblock": false, 00:11:41.689 "num_base_bdevs": 2, 00:11:41.689 "num_base_bdevs_discovered": 2, 00:11:41.689 "num_base_bdevs_operational": 2, 00:11:41.689 "base_bdevs_list": [ 00:11:41.689 { 00:11:41.689 "name": "spare", 00:11:41.689 "uuid": 
"828d9336-e90c-555e-87f5-952caae0f68b", 00:11:41.689 "is_configured": true, 00:11:41.689 "data_offset": 0, 00:11:41.689 "data_size": 65536 00:11:41.689 }, 00:11:41.689 { 00:11:41.689 "name": "BaseBdev2", 00:11:41.689 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:41.689 "is_configured": true, 00:11:41.689 "data_offset": 0, 00:11:41.689 "data_size": 65536 00:11:41.689 } 00:11:41.689 ] 00:11:41.689 }' 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.689 15:27:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.689 "name": "raid_bdev1", 00:11:41.689 "uuid": "64b103df-1c33-4d23-baaa-34f804312f2c", 00:11:41.689 "strip_size_kb": 0, 00:11:41.689 "state": "online", 00:11:41.689 "raid_level": "raid1", 00:11:41.689 "superblock": false, 00:11:41.689 "num_base_bdevs": 2, 00:11:41.689 "num_base_bdevs_discovered": 2, 00:11:41.689 "num_base_bdevs_operational": 2, 00:11:41.689 "base_bdevs_list": [ 00:11:41.689 { 00:11:41.689 "name": "spare", 00:11:41.689 "uuid": "828d9336-e90c-555e-87f5-952caae0f68b", 00:11:41.689 "is_configured": true, 00:11:41.689 "data_offset": 0, 00:11:41.689 "data_size": 65536 00:11:41.689 }, 00:11:41.689 { 00:11:41.689 "name": "BaseBdev2", 00:11:41.689 "uuid": "60fbc494-3799-5775-b2a1-547fdf536360", 00:11:41.689 "is_configured": true, 00:11:41.689 "data_offset": 0, 00:11:41.689 "data_size": 65536 00:11:41.689 } 00:11:41.689 ] 00:11:41.689 }' 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.689 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.257 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.257 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.257 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:11:42.257 [2024-11-26 15:27:40.469583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.257 [2024-11-26 15:27:40.469719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.257 00:11:42.257 Latency(us) 00:11:42.257 [2024-11-26T15:27:40.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.257 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:42.257 raid_bdev1 : 7.59 91.55 274.65 0.00 0.00 14087.16 287.39 110131.10 00:11:42.257 [2024-11-26T15:27:40.736Z] =================================================================================================================== 00:11:42.257 [2024-11-26T15:27:40.736Z] Total : 91.55 274.65 0.00 0.00 14087.16 287.39 110131.10 00:11:42.257 { 00:11:42.257 "results": [ 00:11:42.257 { 00:11:42.257 "job": "raid_bdev1", 00:11:42.257 "core_mask": "0x1", 00:11:42.257 "workload": "randrw", 00:11:42.257 "percentage": 50, 00:11:42.257 "status": "finished", 00:11:42.257 "queue_depth": 2, 00:11:42.257 "io_size": 3145728, 00:11:42.257 "runtime": 7.591508, 00:11:42.257 "iops": 91.54966312358493, 00:11:42.257 "mibps": 274.6489893707548, 00:11:42.257 "io_failed": 0, 00:11:42.257 "io_timeout": 0, 00:11:42.257 "avg_latency_us": 14087.159987132754, 00:11:42.258 "min_latency_us": 287.3947528981086, 00:11:42.258 "max_latency_us": 110131.09735901683 00:11:42.258 } 00:11:42.258 ], 00:11:42.258 "core_count": 1 00:11:42.258 } 00:11:42.258 [2024-11-26 15:27:40.521983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.258 [2024-11-26 15:27:40.522041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.258 [2024-11-26 15:27:40.522148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.258 [2024-11-26 15:27:40.522166] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.258 15:27:40 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:42.516 /dev/nbd0 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.516 1+0 records in 00:11:42.516 1+0 records out 00:11:42.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469785 s, 8.7 MB/s 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.516 15:27:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:42.774 /dev/nbd1 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 
00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.774 1+0 records in 00:11:42.774 1+0 records out 00:11:42.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399103 s, 10.3 MB/s 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:42.774 15:27:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.774 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:43.070 15:27:41 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.070 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 88678 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 88678 ']' 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 88678 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.329 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88678 
00:11:43.329 killing process with pid 88678 00:11:43.329 Received shutdown signal, test time was about 8.735060 seconds 00:11:43.329 00:11:43.329 Latency(us) 00:11:43.329 [2024-11-26T15:27:41.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.330 [2024-11-26T15:27:41.809Z] =================================================================================================================== 00:11:43.330 [2024-11-26T15:27:41.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:43.330 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.330 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.330 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88678' 00:11:43.330 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 88678 00:11:43.330 [2024-11-26 15:27:41.661095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.330 15:27:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 88678 00:11:43.330 [2024-11-26 15:27:41.709691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.589 15:27:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:43.589 00:11:43.589 real 0m10.757s 00:11:43.589 user 0m13.723s 00:11:43.589 sys 0m1.479s 00:11:43.589 ************************************ 00:11:43.589 END TEST raid_rebuild_test_io 00:11:43.589 ************************************ 00:11:43.589 15:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.589 15:27:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.849 15:27:42 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:43.849 15:27:42 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:43.849 15:27:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.849 15:27:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.849 ************************************ 00:11:43.849 START TEST raid_rebuild_test_sb_io 00:11:43.849 ************************************ 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 
00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89037 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89037 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 89037 ']' 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:43.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.849 15:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.849 [2024-11-26 15:27:42.204822] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:11:43.849 [2024-11-26 15:27:42.205044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:43.849 Zero copy mechanism will not be used. 00:11:43.849 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89037 ] 00:11:44.108 [2024-11-26 15:27:42.340975] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:44.108 [2024-11-26 15:27:42.376423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.109 [2024-11-26 15:27:42.415929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.109 [2024-11-26 15:27:42.493804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.109 [2024-11-26 15:27:42.493955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.677 BaseBdev1_malloc 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.677 [2024-11-26 15:27:43.065825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:44.677 [2024-11-26 15:27:43.066020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.677 [2024-11-26 15:27:43.066083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:11:44.677 [2024-11-26 15:27:43.066127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.677 [2024-11-26 15:27:43.068895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.677 [2024-11-26 15:27:43.068985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:44.677 BaseBdev1 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.677 BaseBdev2_malloc 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.677 [2024-11-26 15:27:43.101128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:44.677 [2024-11-26 15:27:43.101267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.677 [2024-11-26 15:27:43.101295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:44.677 [2024-11-26 15:27:43.101309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.677 [2024-11-26 15:27:43.104014] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.677 BaseBdev2 00:11:44.677 [2024-11-26 15:27:43.104107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.677 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.677 spare_malloc 00:11:44.678 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.678 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:44.678 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.678 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.678 spare_delay 00:11:44.678 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.678 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:44.678 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.678 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.678 [2024-11-26 15:27:43.148248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:44.678 [2024-11-26 15:27:43.148378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.678 [2024-11-26 15:27:43.148422] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:44.678 [2024-11-26 15:27:43.148463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.938 [2024-11-26 15:27:43.151416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.938 [2024-11-26 15:27:43.151502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:44.938 spare 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.938 [2024-11-26 15:27:43.160326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.938 [2024-11-26 15:27:43.162913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.938 [2024-11-26 15:27:43.163135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:44.938 [2024-11-26 15:27:43.163205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:44.938 [2024-11-26 15:27:43.163554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:44.938 [2024-11-26 15:27:43.163774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:44.938 [2024-11-26 15:27:43.163821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:44.938 [2024-11-26 15:27:43.164007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.938 15:27:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.938 "name": "raid_bdev1", 00:11:44.938 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:44.938 
"strip_size_kb": 0, 00:11:44.938 "state": "online", 00:11:44.938 "raid_level": "raid1", 00:11:44.938 "superblock": true, 00:11:44.938 "num_base_bdevs": 2, 00:11:44.938 "num_base_bdevs_discovered": 2, 00:11:44.938 "num_base_bdevs_operational": 2, 00:11:44.938 "base_bdevs_list": [ 00:11:44.938 { 00:11:44.938 "name": "BaseBdev1", 00:11:44.938 "uuid": "e2ef29bb-c9a8-5872-ada4-9abf36a32375", 00:11:44.938 "is_configured": true, 00:11:44.938 "data_offset": 2048, 00:11:44.938 "data_size": 63488 00:11:44.938 }, 00:11:44.938 { 00:11:44.938 "name": "BaseBdev2", 00:11:44.938 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:44.938 "is_configured": true, 00:11:44.938 "data_offset": 2048, 00:11:44.938 "data_size": 63488 00:11:44.938 } 00:11:44.938 ] 00:11:44.938 }' 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.938 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.198 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:45.198 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:45.198 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.198 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.198 [2024-11-26 15:27:43.632828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.198 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.198 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.457 [2024-11-26 15:27:43.724446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:45.457 15:27:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.457 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.458 "name": "raid_bdev1", 00:11:45.458 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:45.458 "strip_size_kb": 0, 00:11:45.458 "state": "online", 00:11:45.458 "raid_level": "raid1", 00:11:45.458 "superblock": true, 00:11:45.458 "num_base_bdevs": 2, 00:11:45.458 "num_base_bdevs_discovered": 1, 00:11:45.458 "num_base_bdevs_operational": 1, 00:11:45.458 "base_bdevs_list": [ 00:11:45.458 { 00:11:45.458 "name": null, 00:11:45.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.458 "is_configured": false, 00:11:45.458 "data_offset": 0, 00:11:45.458 "data_size": 63488 00:11:45.458 }, 00:11:45.458 { 00:11:45.458 "name": "BaseBdev2", 00:11:45.458 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:45.458 "is_configured": true, 00:11:45.458 "data_offset": 2048, 00:11:45.458 "data_size": 63488 00:11:45.458 } 00:11:45.458 ] 00:11:45.458 }' 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.458 15:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.458 [2024-11-26 15:27:43.815905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:11:45.458 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:45.458 Zero copy mechanism will not be used. 00:11:45.458 Running I/O for 60 seconds... 00:11:45.717 15:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:45.717 15:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.717 15:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.976 [2024-11-26 15:27:44.211239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:45.976 15:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.976 15:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:45.976 [2024-11-26 15:27:44.291895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:45.976 [2024-11-26 15:27:44.294226] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:45.976 [2024-11-26 15:27:44.418230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:45.976 [2024-11-26 15:27:44.418944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:46.235 [2024-11-26 15:27:44.640283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:46.235 [2024-11-26 15:27:44.640704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:11:46.494 124.00 IOPS, 372.00 MiB/s [2024-11-26T15:27:44.973Z] [2024-11-26 15:27:44.967747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:46.494 [2024-11-26 15:27:44.968492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:46.752 [2024-11-26 15:27:45.178808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:46.753 [2024-11-26 15:27:45.179326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.011 "name": "raid_bdev1", 
00:11:47.011 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:47.011 "strip_size_kb": 0, 00:11:47.011 "state": "online", 00:11:47.011 "raid_level": "raid1", 00:11:47.011 "superblock": true, 00:11:47.011 "num_base_bdevs": 2, 00:11:47.011 "num_base_bdevs_discovered": 2, 00:11:47.011 "num_base_bdevs_operational": 2, 00:11:47.011 "process": { 00:11:47.011 "type": "rebuild", 00:11:47.011 "target": "spare", 00:11:47.011 "progress": { 00:11:47.011 "blocks": 10240, 00:11:47.011 "percent": 16 00:11:47.011 } 00:11:47.011 }, 00:11:47.011 "base_bdevs_list": [ 00:11:47.011 { 00:11:47.011 "name": "spare", 00:11:47.011 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:47.011 "is_configured": true, 00:11:47.011 "data_offset": 2048, 00:11:47.011 "data_size": 63488 00:11:47.011 }, 00:11:47.011 { 00:11:47.011 "name": "BaseBdev2", 00:11:47.011 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:47.011 "is_configured": true, 00:11:47.011 "data_offset": 2048, 00:11:47.011 "data_size": 63488 00:11:47.011 } 00:11:47.011 ] 00:11:47.011 }' 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.011 [2024-11-26 15:27:45.411373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.011 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:11:47.011 [2024-11-26 15:27:45.425268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:47.269 [2024-11-26 15:27:45.625773] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:47.269 [2024-11-26 15:27:45.634041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.269 [2024-11-26 15:27:45.634090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:47.269 [2024-11-26 15:27:45.634102] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:47.269 [2024-11-26 15:27:45.650366] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.269 "name": "raid_bdev1", 00:11:47.269 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:47.269 "strip_size_kb": 0, 00:11:47.269 "state": "online", 00:11:47.269 "raid_level": "raid1", 00:11:47.269 "superblock": true, 00:11:47.269 "num_base_bdevs": 2, 00:11:47.269 "num_base_bdevs_discovered": 1, 00:11:47.269 "num_base_bdevs_operational": 1, 00:11:47.269 "base_bdevs_list": [ 00:11:47.269 { 00:11:47.269 "name": null, 00:11:47.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.269 "is_configured": false, 00:11:47.269 "data_offset": 0, 00:11:47.269 "data_size": 63488 00:11:47.269 }, 00:11:47.269 { 00:11:47.269 "name": "BaseBdev2", 00:11:47.269 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:47.269 "is_configured": true, 00:11:47.269 "data_offset": 2048, 00:11:47.269 "data_size": 63488 00:11:47.269 } 00:11:47.269 ] 00:11:47.269 }' 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.269 15:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.787 112.50 IOPS, 337.50 MiB/s [2024-11-26T15:27:46.266Z] 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.787 "name": "raid_bdev1", 00:11:47.787 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:47.787 "strip_size_kb": 0, 00:11:47.787 "state": "online", 00:11:47.787 "raid_level": "raid1", 00:11:47.787 "superblock": true, 00:11:47.787 "num_base_bdevs": 2, 00:11:47.787 "num_base_bdevs_discovered": 1, 00:11:47.787 "num_base_bdevs_operational": 1, 00:11:47.787 "base_bdevs_list": [ 00:11:47.787 { 00:11:47.787 "name": null, 00:11:47.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.787 "is_configured": false, 00:11:47.787 "data_offset": 0, 00:11:47.787 "data_size": 63488 00:11:47.787 }, 00:11:47.787 { 00:11:47.787 "name": "BaseBdev2", 00:11:47.787 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:47.787 "is_configured": true, 00:11:47.787 "data_offset": 2048, 00:11:47.787 "data_size": 63488 00:11:47.787 } 00:11:47.787 ] 00:11:47.787 }' 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.787 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.787 [2024-11-26 15:27:46.250241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:48.045 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.045 15:27:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:48.045 [2024-11-26 15:27:46.301439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:11:48.046 [2024-11-26 15:27:46.303711] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:48.046 [2024-11-26 15:27:46.414066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:48.046 [2024-11-26 15:27:46.414902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:48.304 [2024-11-26 15:27:46.631452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:48.304 [2024-11-26 15:27:46.631937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:48.564 152.00 IOPS, 456.00 MiB/s [2024-11-26T15:27:47.043Z] 
[2024-11-26 15:27:46.995249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:48.564 [2024-11-26 15:27:46.996140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:48.824 [2024-11-26 15:27:47.112364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:48.824 [2024-11-26 15:27:47.112874] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:48.824 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.824 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.824 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.824 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.824 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.083 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.083 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.083 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.083 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.083 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.083 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.083 "name": "raid_bdev1", 00:11:49.084 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:49.084 
"strip_size_kb": 0, 00:11:49.084 "state": "online", 00:11:49.084 "raid_level": "raid1", 00:11:49.084 "superblock": true, 00:11:49.084 "num_base_bdevs": 2, 00:11:49.084 "num_base_bdevs_discovered": 2, 00:11:49.084 "num_base_bdevs_operational": 2, 00:11:49.084 "process": { 00:11:49.084 "type": "rebuild", 00:11:49.084 "target": "spare", 00:11:49.084 "progress": { 00:11:49.084 "blocks": 10240, 00:11:49.084 "percent": 16 00:11:49.084 } 00:11:49.084 }, 00:11:49.084 "base_bdevs_list": [ 00:11:49.084 { 00:11:49.084 "name": "spare", 00:11:49.084 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:49.084 "is_configured": true, 00:11:49.084 "data_offset": 2048, 00:11:49.084 "data_size": 63488 00:11:49.084 }, 00:11:49.084 { 00:11:49.084 "name": "BaseBdev2", 00:11:49.084 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:49.084 "is_configured": true, 00:11:49.084 "data_offset": 2048, 00:11:49.084 "data_size": 63488 00:11:49.084 } 00:11:49.084 ] 00:11:49.084 }' 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:49.084 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:49.084 15:27:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=326 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:49.084 [2024-11-26 15:27:47.445683] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.084 "name": "raid_bdev1", 00:11:49.084 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:49.084 "strip_size_kb": 0, 00:11:49.084 "state": "online", 00:11:49.084 "raid_level": "raid1", 00:11:49.084 "superblock": true, 00:11:49.084 "num_base_bdevs": 2, 00:11:49.084 
"num_base_bdevs_discovered": 2, 00:11:49.084 "num_base_bdevs_operational": 2, 00:11:49.084 "process": { 00:11:49.084 "type": "rebuild", 00:11:49.084 "target": "spare", 00:11:49.084 "progress": { 00:11:49.084 "blocks": 14336, 00:11:49.084 "percent": 22 00:11:49.084 } 00:11:49.084 }, 00:11:49.084 "base_bdevs_list": [ 00:11:49.084 { 00:11:49.084 "name": "spare", 00:11:49.084 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:49.084 "is_configured": true, 00:11:49.084 "data_offset": 2048, 00:11:49.084 "data_size": 63488 00:11:49.084 }, 00:11:49.084 { 00:11:49.084 "name": "BaseBdev2", 00:11:49.084 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:49.084 "is_configured": true, 00:11:49.084 "data_offset": 2048, 00:11:49.084 "data_size": 63488 00:11:49.084 } 00:11:49.084 ] 00:11:49.084 }' 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:49.084 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.343 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:49.343 15:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:49.343 [2024-11-26 15:27:47.669173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:49.343 [2024-11-26 15:27:47.669694] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:49.602 138.75 IOPS, 416.25 MiB/s [2024-11-26T15:27:48.081Z] [2024-11-26 15:27:48.016414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:49.862 [2024-11-26 15:27:48.124277] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:50.121 [2024-11-26 15:27:48.556853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.121 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.380 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.380 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.380 "name": "raid_bdev1", 00:11:50.380 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:50.380 "strip_size_kb": 0, 00:11:50.380 "state": "online", 00:11:50.380 "raid_level": "raid1", 00:11:50.380 "superblock": true, 00:11:50.380 "num_base_bdevs": 2, 00:11:50.380 "num_base_bdevs_discovered": 2, 00:11:50.381 "num_base_bdevs_operational": 2, 00:11:50.381 "process": { 00:11:50.381 "type": 
"rebuild", 00:11:50.381 "target": "spare", 00:11:50.381 "progress": { 00:11:50.381 "blocks": 28672, 00:11:50.381 "percent": 45 00:11:50.381 } 00:11:50.381 }, 00:11:50.381 "base_bdevs_list": [ 00:11:50.381 { 00:11:50.381 "name": "spare", 00:11:50.381 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:50.381 "is_configured": true, 00:11:50.381 "data_offset": 2048, 00:11:50.381 "data_size": 63488 00:11:50.381 }, 00:11:50.381 { 00:11:50.381 "name": "BaseBdev2", 00:11:50.381 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:50.381 "is_configured": true, 00:11:50.381 "data_offset": 2048, 00:11:50.381 "data_size": 63488 00:11:50.381 } 00:11:50.381 ] 00:11:50.381 }' 00:11:50.381 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.381 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:50.381 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.381 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.381 15:27:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:50.640 124.80 IOPS, 374.40 MiB/s [2024-11-26T15:27:49.119Z] [2024-11-26 15:27:48.915072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:50.965 [2024-11-26 15:27:49.252203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:51.229 [2024-11-26 15:27:49.694699] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:51.229 [2024-11-26 15:27:49.695514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:51.489 15:27:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.489 "name": "raid_bdev1", 00:11:51.489 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:51.489 "strip_size_kb": 0, 00:11:51.489 "state": "online", 00:11:51.489 "raid_level": "raid1", 00:11:51.489 "superblock": true, 00:11:51.489 "num_base_bdevs": 2, 00:11:51.489 "num_base_bdevs_discovered": 2, 00:11:51.489 "num_base_bdevs_operational": 2, 00:11:51.489 "process": { 00:11:51.489 "type": "rebuild", 00:11:51.489 "target": "spare", 00:11:51.489 "progress": { 00:11:51.489 "blocks": 45056, 00:11:51.489 "percent": 70 00:11:51.489 } 00:11:51.489 }, 00:11:51.489 "base_bdevs_list": [ 00:11:51.489 { 00:11:51.489 "name": "spare", 00:11:51.489 "uuid": 
"69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:51.489 "is_configured": true, 00:11:51.489 "data_offset": 2048, 00:11:51.489 "data_size": 63488 00:11:51.489 }, 00:11:51.489 { 00:11:51.489 "name": "BaseBdev2", 00:11:51.489 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:51.489 "is_configured": true, 00:11:51.489 "data_offset": 2048, 00:11:51.489 "data_size": 63488 00:11:51.489 } 00:11:51.489 ] 00:11:51.489 }' 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.489 112.33 IOPS, 337.00 MiB/s [2024-11-26T15:27:49.968Z] 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.489 15:27:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:52.057 [2024-11-26 15:27:50.230740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:52.316 [2024-11-26 15:27:50.774536] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:52.316 [2024-11-26 15:27:50.783844] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:52.575 [2024-11-26 15:27:50.792480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.575 104.29 IOPS, 312.86 MiB/s [2024-11-26T15:27:51.054Z] 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.575 "name": "raid_bdev1", 00:11:52.575 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:52.575 "strip_size_kb": 0, 00:11:52.575 "state": "online", 00:11:52.575 "raid_level": "raid1", 00:11:52.575 "superblock": true, 00:11:52.575 "num_base_bdevs": 2, 00:11:52.575 "num_base_bdevs_discovered": 2, 00:11:52.575 "num_base_bdevs_operational": 2, 00:11:52.575 "base_bdevs_list": [ 00:11:52.575 { 00:11:52.575 "name": "spare", 00:11:52.575 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:52.575 "is_configured": true, 00:11:52.575 "data_offset": 2048, 00:11:52.575 "data_size": 63488 00:11:52.575 }, 00:11:52.575 { 00:11:52.575 "name": "BaseBdev2", 00:11:52.575 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:52.575 "is_configured": true, 00:11:52.575 "data_offset": 2048, 00:11:52.575 "data_size": 63488 00:11:52.575 } 00:11:52.575 ] 00:11:52.575 }' 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.575 
15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:52.575 15:27:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.575 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:52.575 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.576 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.834 "name": "raid_bdev1", 00:11:52.834 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:52.834 "strip_size_kb": 0, 00:11:52.834 "state": "online", 00:11:52.834 "raid_level": "raid1", 00:11:52.834 "superblock": true, 00:11:52.834 "num_base_bdevs": 2, 00:11:52.834 "num_base_bdevs_discovered": 2, 00:11:52.834 
"num_base_bdevs_operational": 2, 00:11:52.834 "base_bdevs_list": [ 00:11:52.834 { 00:11:52.834 "name": "spare", 00:11:52.834 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:52.834 "is_configured": true, 00:11:52.834 "data_offset": 2048, 00:11:52.834 "data_size": 63488 00:11:52.834 }, 00:11:52.834 { 00:11:52.834 "name": "BaseBdev2", 00:11:52.834 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:52.834 "is_configured": true, 00:11:52.834 "data_offset": 2048, 00:11:52.834 "data_size": 63488 00:11:52.834 } 00:11:52.834 ] 00:11:52.834 }' 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.834 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.834 "name": "raid_bdev1", 00:11:52.834 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:52.834 "strip_size_kb": 0, 00:11:52.834 "state": "online", 00:11:52.834 "raid_level": "raid1", 00:11:52.834 "superblock": true, 00:11:52.834 "num_base_bdevs": 2, 00:11:52.834 "num_base_bdevs_discovered": 2, 00:11:52.834 "num_base_bdevs_operational": 2, 00:11:52.834 "base_bdevs_list": [ 00:11:52.834 { 00:11:52.834 "name": "spare", 00:11:52.834 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:52.834 "is_configured": true, 00:11:52.834 "data_offset": 2048, 00:11:52.834 "data_size": 63488 00:11:52.834 }, 00:11:52.834 { 00:11:52.834 "name": "BaseBdev2", 00:11:52.834 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:52.834 "is_configured": true, 00:11:52.834 "data_offset": 2048, 00:11:52.834 "data_size": 63488 00:11:52.834 } 00:11:52.835 ] 00:11:52.835 }' 00:11:52.835 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.835 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.402 [2024-11-26 15:27:51.596891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.402 [2024-11-26 15:27:51.596941] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.402 00:11:53.402 Latency(us) 00:11:53.402 [2024-11-26T15:27:51.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.402 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:53.402 raid_bdev1 : 7.84 96.66 289.98 0.00 0.00 13878.24 280.25 111959.00 00:11:53.402 [2024-11-26T15:27:51.881Z] =================================================================================================================== 00:11:53.402 [2024-11-26T15:27:51.881Z] Total : 96.66 289.98 0.00 0.00 13878.24 280.25 111959.00 00:11:53.402 [2024-11-26 15:27:51.664362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.402 [2024-11-26 15:27:51.664411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.402 [2024-11-26 15:27:51.664494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.402 [2024-11-26 15:27:51.664505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:53.402 { 00:11:53.402 "results": [ 00:11:53.402 { 00:11:53.402 "job": "raid_bdev1", 00:11:53.402 "core_mask": "0x1", 00:11:53.402 "workload": "randrw", 00:11:53.402 "percentage": 50, 00:11:53.402 "status": "finished", 00:11:53.402 "queue_depth": 2, 00:11:53.402 "io_size": 3145728, 00:11:53.402 "runtime": 7.841857, 00:11:53.402 "iops": 96.66077818047434, 00:11:53.402 "mibps": 
289.98233454142303, 00:11:53.402 "io_failed": 0, 00:11:53.402 "io_timeout": 0, 00:11:53.402 "avg_latency_us": 13878.244293646181, 00:11:53.402 "min_latency_us": 280.25451059008105, 00:11:53.402 "max_latency_us": 111958.99938987187 00:11:53.402 } 00:11:53.402 ], 00:11:53.402 "core_count": 1 00:11:53.402 } 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:53.402 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:53.661 /dev/nbd0 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.661 1+0 records in 00:11:53.661 1+0 records out 00:11:53.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376974 s, 10.9 MB/s 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:53.661 15:27:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:53.920 /dev/nbd1 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.920 1+0 records in 00:11:53.920 1+0 records out 00:11:53.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362114 s, 11.3 MB/s 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:53.920 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.179 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 
00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.438 [2024-11-26 15:27:52.752337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:54.438 [2024-11-26 15:27:52.752398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.438 [2024-11-26 15:27:52.752421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:54.438 [2024-11-26 15:27:52.752430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.438 [2024-11-26 15:27:52.754974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.438 [2024-11-26 15:27:52.755011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:54.438 [2024-11-26 15:27:52.755090] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:54.438 [2024-11-26 15:27:52.755140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:54.438 [2024-11-26 15:27:52.755262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.438 spare 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd 
bdev_wait_for_examine 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.438 [2024-11-26 15:27:52.855331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:54.438 [2024-11-26 15:27:52.855372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:54.438 [2024-11-26 15:27:52.855713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:11:54.438 [2024-11-26 15:27:52.855880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:54.438 [2024-11-26 15:27:52.855897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:54.438 [2024-11-26 15:27:52.856046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.438 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.439 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.439 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.439 "name": "raid_bdev1", 00:11:54.439 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:54.439 "strip_size_kb": 0, 00:11:54.439 "state": "online", 00:11:54.439 "raid_level": "raid1", 00:11:54.439 "superblock": true, 00:11:54.439 "num_base_bdevs": 2, 00:11:54.439 "num_base_bdevs_discovered": 2, 00:11:54.439 "num_base_bdevs_operational": 2, 00:11:54.439 "base_bdevs_list": [ 00:11:54.439 { 00:11:54.439 "name": "spare", 00:11:54.439 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:54.439 "is_configured": true, 00:11:54.439 "data_offset": 2048, 00:11:54.439 "data_size": 63488 00:11:54.439 }, 00:11:54.439 { 00:11:54.439 "name": "BaseBdev2", 00:11:54.439 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:54.439 "is_configured": true, 00:11:54.439 "data_offset": 2048, 00:11:54.439 "data_size": 63488 00:11:54.439 } 00:11:54.439 ] 00:11:54.439 }' 00:11:54.439 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.439 15:27:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.006 "name": "raid_bdev1", 00:11:55.006 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:55.006 "strip_size_kb": 0, 00:11:55.006 "state": "online", 00:11:55.006 "raid_level": "raid1", 00:11:55.006 "superblock": true, 00:11:55.006 "num_base_bdevs": 2, 00:11:55.006 "num_base_bdevs_discovered": 2, 00:11:55.006 "num_base_bdevs_operational": 2, 00:11:55.006 "base_bdevs_list": [ 00:11:55.006 { 00:11:55.006 "name": "spare", 00:11:55.006 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:55.006 "is_configured": true, 00:11:55.006 "data_offset": 2048, 00:11:55.006 "data_size": 63488 00:11:55.006 }, 00:11:55.006 { 00:11:55.006 "name": "BaseBdev2", 00:11:55.006 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:55.006 "is_configured": 
true, 00:11:55.006 "data_offset": 2048, 00:11:55.006 "data_size": 63488 00:11:55.006 } 00:11:55.006 ] 00:11:55.006 }' 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:55.006 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.007 [2024-11-26 15:27:53.460641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.007 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.265 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.265 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.265 "name": "raid_bdev1", 00:11:55.265 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:55.265 "strip_size_kb": 0, 00:11:55.265 "state": "online", 00:11:55.265 "raid_level": "raid1", 00:11:55.265 "superblock": true, 00:11:55.265 "num_base_bdevs": 2, 00:11:55.265 "num_base_bdevs_discovered": 1, 00:11:55.265 "num_base_bdevs_operational": 1, 00:11:55.265 "base_bdevs_list": [ 
00:11:55.265 { 00:11:55.265 "name": null, 00:11:55.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.265 "is_configured": false, 00:11:55.265 "data_offset": 0, 00:11:55.265 "data_size": 63488 00:11:55.265 }, 00:11:55.265 { 00:11:55.265 "name": "BaseBdev2", 00:11:55.265 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:55.265 "is_configured": true, 00:11:55.265 "data_offset": 2048, 00:11:55.265 "data_size": 63488 00:11:55.265 } 00:11:55.266 ] 00:11:55.266 }' 00:11:55.266 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.266 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.524 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:55.524 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.524 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.524 [2024-11-26 15:27:53.904888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:55.524 [2024-11-26 15:27:53.905138] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:55.524 [2024-11-26 15:27:53.905159] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:55.524 [2024-11-26 15:27:53.905215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:55.524 [2024-11-26 15:27:53.914578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:11:55.524 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.524 15:27:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:55.524 [2024-11-26 15:27:53.916798] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:56.460 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.460 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.460 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.460 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.460 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.460 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.460 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.460 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.460 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.720 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.720 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.720 "name": "raid_bdev1", 00:11:56.720 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:56.720 "strip_size_kb": 0, 00:11:56.720 "state": "online", 
00:11:56.720 "raid_level": "raid1", 00:11:56.720 "superblock": true, 00:11:56.720 "num_base_bdevs": 2, 00:11:56.720 "num_base_bdevs_discovered": 2, 00:11:56.720 "num_base_bdevs_operational": 2, 00:11:56.720 "process": { 00:11:56.720 "type": "rebuild", 00:11:56.720 "target": "spare", 00:11:56.720 "progress": { 00:11:56.720 "blocks": 20480, 00:11:56.720 "percent": 32 00:11:56.720 } 00:11:56.720 }, 00:11:56.720 "base_bdevs_list": [ 00:11:56.720 { 00:11:56.720 "name": "spare", 00:11:56.720 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:56.720 "is_configured": true, 00:11:56.720 "data_offset": 2048, 00:11:56.720 "data_size": 63488 00:11:56.720 }, 00:11:56.720 { 00:11:56.720 "name": "BaseBdev2", 00:11:56.720 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:56.720 "is_configured": true, 00:11:56.720 "data_offset": 2048, 00:11:56.720 "data_size": 63488 00:11:56.720 } 00:11:56.720 ] 00:11:56.720 }' 00:11:56.720 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.720 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.720 15:27:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.720 [2024-11-26 15:27:55.046633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:56.720 [2024-11-26 15:27:55.125375] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:56.720 [2024-11-26 
15:27:55.125479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.720 [2024-11-26 15:27:55.125496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:56.720 [2024-11-26 15:27:55.125506] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.720 "name": "raid_bdev1", 00:11:56.720 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:56.720 "strip_size_kb": 0, 00:11:56.720 "state": "online", 00:11:56.720 "raid_level": "raid1", 00:11:56.720 "superblock": true, 00:11:56.720 "num_base_bdevs": 2, 00:11:56.720 "num_base_bdevs_discovered": 1, 00:11:56.720 "num_base_bdevs_operational": 1, 00:11:56.720 "base_bdevs_list": [ 00:11:56.720 { 00:11:56.720 "name": null, 00:11:56.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.720 "is_configured": false, 00:11:56.720 "data_offset": 0, 00:11:56.720 "data_size": 63488 00:11:56.720 }, 00:11:56.720 { 00:11:56.720 "name": "BaseBdev2", 00:11:56.720 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:56.720 "is_configured": true, 00:11:56.720 "data_offset": 2048, 00:11:56.720 "data_size": 63488 00:11:56.720 } 00:11:56.720 ] 00:11:56.720 }' 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.720 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.288 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:57.288 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.288 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.288 [2024-11-26 15:27:55.562745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:57.288 [2024-11-26 15:27:55.562825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.288 [2024-11-26 15:27:55.562851] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:11:57.288 [2024-11-26 15:27:55.562863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.288 [2024-11-26 15:27:55.563365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.288 [2024-11-26 15:27:55.563396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:57.288 [2024-11-26 15:27:55.563494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:57.288 [2024-11-26 15:27:55.563510] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:57.288 [2024-11-26 15:27:55.563521] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:57.288 [2024-11-26 15:27:55.563551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:57.288 [2024-11-26 15:27:55.568854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:11:57.288 spare 00:11:57.288 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.288 15:27:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:57.288 [2024-11-26 15:27:55.570761] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.225 "name": "raid_bdev1", 00:11:58.225 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:58.225 "strip_size_kb": 0, 00:11:58.225 "state": "online", 00:11:58.225 "raid_level": "raid1", 00:11:58.225 "superblock": true, 00:11:58.225 "num_base_bdevs": 2, 00:11:58.225 "num_base_bdevs_discovered": 2, 00:11:58.225 "num_base_bdevs_operational": 2, 00:11:58.225 "process": { 00:11:58.225 "type": "rebuild", 00:11:58.225 "target": "spare", 00:11:58.225 "progress": { 00:11:58.225 "blocks": 20480, 00:11:58.225 "percent": 32 00:11:58.225 } 00:11:58.225 }, 00:11:58.225 "base_bdevs_list": [ 00:11:58.225 { 00:11:58.225 "name": "spare", 00:11:58.225 "uuid": "69e97d38-11c8-5bb3-a195-070290db7d6b", 00:11:58.225 "is_configured": true, 00:11:58.225 "data_offset": 2048, 00:11:58.225 "data_size": 63488 00:11:58.225 }, 00:11:58.225 { 00:11:58.225 "name": "BaseBdev2", 00:11:58.225 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:58.225 "is_configured": true, 00:11:58.225 "data_offset": 2048, 00:11:58.225 "data_size": 63488 00:11:58.225 } 00:11:58.225 ] 00:11:58.225 }' 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:11:58.225 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.484 [2024-11-26 15:27:56.705903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.484 [2024-11-26 15:27:56.777883] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:58.484 [2024-11-26 15:27:56.777953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.484 [2024-11-26 15:27:56.777982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.484 [2024-11-26 15:27:56.777990] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.484 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.484 "name": "raid_bdev1", 00:11:58.484 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:58.484 "strip_size_kb": 0, 00:11:58.484 "state": "online", 00:11:58.484 "raid_level": "raid1", 00:11:58.484 "superblock": true, 00:11:58.485 "num_base_bdevs": 2, 00:11:58.485 "num_base_bdevs_discovered": 1, 00:11:58.485 "num_base_bdevs_operational": 1, 00:11:58.485 "base_bdevs_list": [ 00:11:58.485 { 00:11:58.485 "name": null, 00:11:58.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.485 "is_configured": false, 00:11:58.485 "data_offset": 0, 00:11:58.485 "data_size": 63488 00:11:58.485 }, 00:11:58.485 { 00:11:58.485 "name": "BaseBdev2", 00:11:58.485 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:58.485 "is_configured": true, 00:11:58.485 "data_offset": 2048, 00:11:58.485 "data_size": 63488 00:11:58.485 } 00:11:58.485 ] 00:11:58.485 }' 
00:11:58.485 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.485 15:27:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.052 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:59.052 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.052 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:59.052 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:59.052 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.053 "name": "raid_bdev1", 00:11:59.053 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:59.053 "strip_size_kb": 0, 00:11:59.053 "state": "online", 00:11:59.053 "raid_level": "raid1", 00:11:59.053 "superblock": true, 00:11:59.053 "num_base_bdevs": 2, 00:11:59.053 "num_base_bdevs_discovered": 1, 00:11:59.053 "num_base_bdevs_operational": 1, 00:11:59.053 "base_bdevs_list": [ 00:11:59.053 { 00:11:59.053 "name": null, 00:11:59.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.053 "is_configured": false, 00:11:59.053 "data_offset": 0, 
00:11:59.053 "data_size": 63488 00:11:59.053 }, 00:11:59.053 { 00:11:59.053 "name": "BaseBdev2", 00:11:59.053 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:59.053 "is_configured": true, 00:11:59.053 "data_offset": 2048, 00:11:59.053 "data_size": 63488 00:11:59.053 } 00:11:59.053 ] 00:11:59.053 }' 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.053 [2024-11-26 15:27:57.387265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:59.053 [2024-11-26 15:27:57.387327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.053 [2024-11-26 15:27:57.387352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:59.053 [2024-11-26 15:27:57.387361] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.053 [2024-11-26 15:27:57.387763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.053 [2024-11-26 15:27:57.387789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:59.053 [2024-11-26 15:27:57.387868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:59.053 [2024-11-26 15:27:57.387892] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:59.053 [2024-11-26 15:27:57.387904] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:59.053 [2024-11-26 15:27:57.387914] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:59.053 BaseBdev1 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.053 15:27:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.992 "name": "raid_bdev1", 00:11:59.992 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:11:59.992 "strip_size_kb": 0, 00:11:59.992 "state": "online", 00:11:59.992 "raid_level": "raid1", 00:11:59.992 "superblock": true, 00:11:59.992 "num_base_bdevs": 2, 00:11:59.992 "num_base_bdevs_discovered": 1, 00:11:59.992 "num_base_bdevs_operational": 1, 00:11:59.992 "base_bdevs_list": [ 00:11:59.992 { 00:11:59.992 "name": null, 00:11:59.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.992 "is_configured": false, 00:11:59.992 "data_offset": 0, 00:11:59.992 "data_size": 63488 00:11:59.992 }, 00:11:59.992 { 00:11:59.992 "name": "BaseBdev2", 00:11:59.992 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:11:59.992 "is_configured": true, 00:11:59.992 "data_offset": 2048, 00:11:59.992 "data_size": 63488 00:11:59.992 } 00:11:59.992 ] 00:11:59.992 }' 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.992 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.568 "name": "raid_bdev1", 00:12:00.568 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:12:00.568 "strip_size_kb": 0, 00:12:00.568 "state": "online", 00:12:00.568 "raid_level": "raid1", 00:12:00.568 "superblock": true, 00:12:00.568 "num_base_bdevs": 2, 00:12:00.568 "num_base_bdevs_discovered": 1, 00:12:00.568 "num_base_bdevs_operational": 1, 00:12:00.568 "base_bdevs_list": [ 00:12:00.568 { 00:12:00.568 "name": null, 00:12:00.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.568 "is_configured": false, 00:12:00.568 "data_offset": 0, 00:12:00.568 "data_size": 63488 00:12:00.568 }, 00:12:00.568 { 00:12:00.568 "name": "BaseBdev2", 00:12:00.568 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:12:00.568 "is_configured": true, 
00:12:00.568 "data_offset": 2048, 00:12:00.568 "data_size": 63488 00:12:00.568 } 00:12:00.568 ] 00:12:00.568 }' 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.568 [2024-11-26 15:27:58.963875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.568 [2024-11-26 15:27:58.964055] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:00.568 [2024-11-26 15:27:58.964075] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:00.568 request: 00:12:00.568 { 00:12:00.568 "base_bdev": "BaseBdev1", 00:12:00.568 "raid_bdev": "raid_bdev1", 00:12:00.568 "method": "bdev_raid_add_base_bdev", 00:12:00.568 "req_id": 1 00:12:00.568 } 00:12:00.568 Got JSON-RPC error response 00:12:00.568 response: 00:12:00.568 { 00:12:00.568 "code": -22, 00:12:00.568 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:00.568 } 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:00.568 15:27:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:01.505 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:01.505 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.505 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.505 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.763 15:27:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.763 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.763 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.763 "name": "raid_bdev1", 00:12:01.763 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:12:01.763 "strip_size_kb": 0, 00:12:01.763 "state": "online", 00:12:01.763 "raid_level": "raid1", 00:12:01.763 "superblock": true, 00:12:01.763 "num_base_bdevs": 2, 00:12:01.763 "num_base_bdevs_discovered": 1, 00:12:01.763 "num_base_bdevs_operational": 1, 00:12:01.763 "base_bdevs_list": [ 00:12:01.763 { 00:12:01.763 "name": null, 00:12:01.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.763 "is_configured": false, 00:12:01.763 "data_offset": 0, 00:12:01.763 "data_size": 63488 00:12:01.763 }, 00:12:01.763 { 00:12:01.763 "name": "BaseBdev2", 00:12:01.763 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:12:01.763 "is_configured": true, 00:12:01.763 "data_offset": 2048, 00:12:01.763 "data_size": 63488 00:12:01.763 } 00:12:01.763 ] 00:12:01.763 }' 
00:12:01.763 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.763 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.021 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.021 "name": "raid_bdev1", 00:12:02.021 "uuid": "3f65317a-fb7b-4ba8-ba3c-c14391c41e2e", 00:12:02.021 "strip_size_kb": 0, 00:12:02.021 "state": "online", 00:12:02.021 "raid_level": "raid1", 00:12:02.021 "superblock": true, 00:12:02.021 "num_base_bdevs": 2, 00:12:02.021 "num_base_bdevs_discovered": 1, 00:12:02.021 "num_base_bdevs_operational": 1, 00:12:02.021 "base_bdevs_list": [ 00:12:02.021 { 00:12:02.021 "name": null, 00:12:02.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.021 "is_configured": false, 00:12:02.022 "data_offset": 0, 
00:12:02.022 "data_size": 63488 00:12:02.022 }, 00:12:02.022 { 00:12:02.022 "name": "BaseBdev2", 00:12:02.022 "uuid": "de38ab37-0bd0-5e1f-9608-0d8c700a7a80", 00:12:02.022 "is_configured": true, 00:12:02.022 "data_offset": 2048, 00:12:02.022 "data_size": 63488 00:12:02.022 } 00:12:02.022 ] 00:12:02.022 }' 00:12:02.022 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89037 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 89037 ']' 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 89037 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89037 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.280 killing process with pid 89037 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89037' 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 89037 00:12:02.280 Received shutdown signal, test time was 
about 16.757749 seconds 00:12:02.280 00:12:02.280 Latency(us) 00:12:02.280 [2024-11-26T15:28:00.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.280 [2024-11-26T15:28:00.759Z] =================================================================================================================== 00:12:02.280 [2024-11-26T15:28:00.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:02.280 [2024-11-26 15:28:00.577343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.280 [2024-11-26 15:28:00.577489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.280 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 89037 00:12:02.280 [2024-11-26 15:28:00.577554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.280 [2024-11-26 15:28:00.577567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:02.280 [2024-11-26 15:28:00.604516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:02.540 00:12:02.540 real 0m18.704s 00:12:02.540 user 0m24.658s 00:12:02.540 sys 0m2.261s 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.540 ************************************ 00:12:02.540 END TEST raid_rebuild_test_sb_io 00:12:02.540 ************************************ 00:12:02.540 15:28:00 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:02.540 15:28:00 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:02.540 15:28:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:02.540 
15:28:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.540 15:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.540 ************************************ 00:12:02.540 START TEST raid_rebuild_test 00:12:02.540 ************************************ 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=89717 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 89717 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 89717 ']' 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.540 15:28:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.540 15:28:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.540 [2024-11-26 15:28:00.980952] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:12:02.540 [2024-11-26 15:28:00.981086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89717 ] 00:12:02.540 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:02.540 Zero copy mechanism will not be used. 00:12:02.798 [2024-11-26 15:28:01.115339] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:02.798 [2024-11-26 15:28:01.151717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.798 [2024-11-26 15:28:01.176583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.798 [2024-11-26 15:28:01.218462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.798 [2024-11-26 15:28:01.218509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.365 BaseBdev1_malloc 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.365 [2024-11-26 15:28:01.817420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:03.365 [2024-11-26 15:28:01.817485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.365 [2024-11-26 15:28:01.817511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.365 [2024-11-26 15:28:01.817525] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.365 [2024-11-26 15:28:01.819545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.365 [2024-11-26 15:28:01.819580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.365 BaseBdev1 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.365 BaseBdev2_malloc 00:12:03.365 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.624 [2024-11-26 15:28:01.846291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:03.624 [2024-11-26 15:28:01.846361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.624 [2024-11-26 15:28:01.846379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:03.624 [2024-11-26 15:28:01.846389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.624 [2024-11-26 15:28:01.848382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.624 [2024-11-26 15:28:01.848420] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:03.624 BaseBdev2 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.624 BaseBdev3_malloc 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.624 [2024-11-26 15:28:01.874834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:03.624 [2024-11-26 15:28:01.874885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.624 [2024-11-26 15:28:01.874918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:03.624 [2024-11-26 15:28:01.874928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.624 [2024-11-26 15:28:01.876919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.624 [2024-11-26 15:28:01.876961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:03.624 BaseBdev3 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.624 
15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.624 BaseBdev4_malloc 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.624 [2024-11-26 15:28:01.919494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:03.624 [2024-11-26 15:28:01.919600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.624 [2024-11-26 15:28:01.919647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:03.624 [2024-11-26 15:28:01.919671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.624 [2024-11-26 15:28:01.924140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.624 [2024-11-26 15:28:01.924237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:03.624 BaseBdev4 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.624 spare_malloc 00:12:03.624 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.625 spare_delay 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.625 [2024-11-26 15:28:01.961926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:03.625 [2024-11-26 15:28:01.962001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.625 [2024-11-26 15:28:01.962024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:03.625 [2024-11-26 15:28:01.962037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.625 [2024-11-26 15:28:01.964037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.625 [2024-11-26 15:28:01.964076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:03.625 spare 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.625 [2024-11-26 15:28:01.973982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.625 [2024-11-26 15:28:01.975790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.625 [2024-11-26 15:28:01.975854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.625 [2024-11-26 15:28:01.975898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.625 [2024-11-26 15:28:01.975982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:03.625 [2024-11-26 15:28:01.975999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:03.625 [2024-11-26 15:28:01.976240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:03.625 [2024-11-26 15:28:01.976390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:03.625 [2024-11-26 15:28:01.976403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:03.625 [2024-11-26 15:28:01.976522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.625 15:28:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.625 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.625 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.625 "name": "raid_bdev1", 00:12:03.625 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:03.625 "strip_size_kb": 0, 00:12:03.625 "state": "online", 00:12:03.625 "raid_level": "raid1", 00:12:03.625 "superblock": false, 00:12:03.625 "num_base_bdevs": 4, 00:12:03.625 "num_base_bdevs_discovered": 4, 00:12:03.625 "num_base_bdevs_operational": 4, 00:12:03.625 "base_bdevs_list": [ 00:12:03.625 { 00:12:03.625 "name": "BaseBdev1", 00:12:03.625 "uuid": "70d2ccd9-1b3f-56be-a296-5caac885c3df", 00:12:03.625 "is_configured": true, 00:12:03.625 "data_offset": 0, 00:12:03.625 "data_size": 65536 00:12:03.625 }, 00:12:03.625 { 00:12:03.625 
"name": "BaseBdev2", 00:12:03.625 "uuid": "fd052376-6b31-503e-bb44-c7e04c7a4c74", 00:12:03.625 "is_configured": true, 00:12:03.625 "data_offset": 0, 00:12:03.625 "data_size": 65536 00:12:03.625 }, 00:12:03.625 { 00:12:03.625 "name": "BaseBdev3", 00:12:03.625 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:03.625 "is_configured": true, 00:12:03.625 "data_offset": 0, 00:12:03.625 "data_size": 65536 00:12:03.625 }, 00:12:03.625 { 00:12:03.625 "name": "BaseBdev4", 00:12:03.625 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:03.625 "is_configured": true, 00:12:03.625 "data_offset": 0, 00:12:03.625 "data_size": 65536 00:12:03.625 } 00:12:03.625 ] 00:12:03.625 }' 00:12:03.625 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.625 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.192 [2024-11-26 15:28:02.434385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq 
-r '.[].base_bdevs_list[0].data_offset' 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:04.192 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:04.451 [2024-11-26 15:28:02.706288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:04.451 /dev/nbd0 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:04.451 
15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.451 1+0 records in 00:12:04.451 1+0 records out 00:12:04.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289197 s, 14.2 MB/s 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:04.451 15:28:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:09.722 65536+0 records in 00:12:09.722 65536+0 records out 00:12:09.722 33554432 bytes (34 MB, 32 MiB) copied, 4.83407 s, 6.9 MB/s 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:09.722 [2024-11-26 15:28:07.813062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:09.722 15:28:07 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.722 [2024-11-26 15:28:07.825165] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.722 15:28:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.722 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.722 "name": "raid_bdev1", 00:12:09.722 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:09.722 "strip_size_kb": 0, 00:12:09.722 "state": "online", 00:12:09.722 "raid_level": "raid1", 00:12:09.722 "superblock": false, 00:12:09.722 "num_base_bdevs": 4, 00:12:09.722 "num_base_bdevs_discovered": 3, 00:12:09.722 "num_base_bdevs_operational": 3, 00:12:09.722 "base_bdevs_list": [ 00:12:09.722 { 00:12:09.722 "name": null, 00:12:09.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.722 "is_configured": false, 00:12:09.722 "data_offset": 0, 00:12:09.722 "data_size": 65536 00:12:09.722 }, 00:12:09.722 { 00:12:09.722 "name": "BaseBdev2", 00:12:09.722 "uuid": "fd052376-6b31-503e-bb44-c7e04c7a4c74", 00:12:09.722 "is_configured": true, 00:12:09.722 "data_offset": 0, 00:12:09.722 "data_size": 65536 00:12:09.722 }, 00:12:09.722 { 00:12:09.722 "name": "BaseBdev3", 00:12:09.722 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:09.722 "is_configured": true, 00:12:09.722 "data_offset": 0, 00:12:09.722 "data_size": 65536 00:12:09.722 }, 00:12:09.722 { 00:12:09.722 "name": "BaseBdev4", 00:12:09.722 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:09.722 "is_configured": true, 00:12:09.722 "data_offset": 0, 00:12:09.722 "data_size": 65536 00:12:09.722 } 00:12:09.723 ] 00:12:09.723 }' 00:12:09.723 15:28:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.723 15:28:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.982 15:28:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:09.982 15:28:08 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.982 15:28:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.982 [2024-11-26 15:28:08.273318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.982 [2024-11-26 15:28:08.277544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a180 00:12:09.982 15:28:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.982 15:28:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:09.982 [2024-11-26 15:28:08.279389] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:10.920 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.920 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.920 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.920 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.920 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.920 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.921 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.921 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.921 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.921 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.921 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.921 "name": "raid_bdev1", 00:12:10.921 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 
00:12:10.921 "strip_size_kb": 0, 00:12:10.921 "state": "online", 00:12:10.921 "raid_level": "raid1", 00:12:10.921 "superblock": false, 00:12:10.921 "num_base_bdevs": 4, 00:12:10.921 "num_base_bdevs_discovered": 4, 00:12:10.921 "num_base_bdevs_operational": 4, 00:12:10.921 "process": { 00:12:10.921 "type": "rebuild", 00:12:10.921 "target": "spare", 00:12:10.921 "progress": { 00:12:10.921 "blocks": 20480, 00:12:10.921 "percent": 31 00:12:10.921 } 00:12:10.921 }, 00:12:10.921 "base_bdevs_list": [ 00:12:10.921 { 00:12:10.921 "name": "spare", 00:12:10.921 "uuid": "dc604ab8-2613-51a3-9b1d-89e5f8c1b1f6", 00:12:10.921 "is_configured": true, 00:12:10.921 "data_offset": 0, 00:12:10.921 "data_size": 65536 00:12:10.921 }, 00:12:10.921 { 00:12:10.921 "name": "BaseBdev2", 00:12:10.921 "uuid": "fd052376-6b31-503e-bb44-c7e04c7a4c74", 00:12:10.921 "is_configured": true, 00:12:10.921 "data_offset": 0, 00:12:10.921 "data_size": 65536 00:12:10.921 }, 00:12:10.921 { 00:12:10.921 "name": "BaseBdev3", 00:12:10.921 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:10.921 "is_configured": true, 00:12:10.921 "data_offset": 0, 00:12:10.921 "data_size": 65536 00:12:10.921 }, 00:12:10.921 { 00:12:10.921 "name": "BaseBdev4", 00:12:10.921 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:10.921 "is_configured": true, 00:12:10.921 "data_offset": 0, 00:12:10.921 "data_size": 65536 00:12:10.921 } 00:12:10.921 ] 00:12:10.921 }' 00:12:10.921 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.921 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.921 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.180 [2024-11-26 15:28:09.434330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.180 [2024-11-26 15:28:09.486837] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:11.180 [2024-11-26 15:28:09.486915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.180 [2024-11-26 15:28:09.486933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.180 [2024-11-26 15:28:09.486945] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.180 "name": "raid_bdev1", 00:12:11.180 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:11.180 "strip_size_kb": 0, 00:12:11.180 "state": "online", 00:12:11.180 "raid_level": "raid1", 00:12:11.180 "superblock": false, 00:12:11.180 "num_base_bdevs": 4, 00:12:11.180 "num_base_bdevs_discovered": 3, 00:12:11.180 "num_base_bdevs_operational": 3, 00:12:11.180 "base_bdevs_list": [ 00:12:11.180 { 00:12:11.180 "name": null, 00:12:11.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.180 "is_configured": false, 00:12:11.180 "data_offset": 0, 00:12:11.180 "data_size": 65536 00:12:11.180 }, 00:12:11.180 { 00:12:11.180 "name": "BaseBdev2", 00:12:11.180 "uuid": "fd052376-6b31-503e-bb44-c7e04c7a4c74", 00:12:11.180 "is_configured": true, 00:12:11.180 "data_offset": 0, 00:12:11.180 "data_size": 65536 00:12:11.180 }, 00:12:11.180 { 00:12:11.180 "name": "BaseBdev3", 00:12:11.180 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:11.180 "is_configured": true, 00:12:11.180 "data_offset": 0, 00:12:11.180 "data_size": 65536 00:12:11.180 }, 00:12:11.180 { 00:12:11.180 "name": "BaseBdev4", 00:12:11.180 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:11.180 "is_configured": true, 00:12:11.180 "data_offset": 0, 00:12:11.180 "data_size": 65536 00:12:11.180 } 00:12:11.180 ] 00:12:11.180 }' 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.180 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.748 15:28:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.748 "name": "raid_bdev1", 00:12:11.748 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:11.748 "strip_size_kb": 0, 00:12:11.748 "state": "online", 00:12:11.748 "raid_level": "raid1", 00:12:11.748 "superblock": false, 00:12:11.748 "num_base_bdevs": 4, 00:12:11.748 "num_base_bdevs_discovered": 3, 00:12:11.748 "num_base_bdevs_operational": 3, 00:12:11.748 "base_bdevs_list": [ 00:12:11.748 { 00:12:11.748 "name": null, 00:12:11.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.748 "is_configured": false, 00:12:11.748 "data_offset": 0, 00:12:11.748 "data_size": 65536 00:12:11.748 }, 00:12:11.748 { 00:12:11.748 "name": "BaseBdev2", 00:12:11.748 "uuid": 
"fd052376-6b31-503e-bb44-c7e04c7a4c74", 00:12:11.748 "is_configured": true, 00:12:11.748 "data_offset": 0, 00:12:11.748 "data_size": 65536 00:12:11.748 }, 00:12:11.748 { 00:12:11.748 "name": "BaseBdev3", 00:12:11.748 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:11.748 "is_configured": true, 00:12:11.748 "data_offset": 0, 00:12:11.748 "data_size": 65536 00:12:11.748 }, 00:12:11.748 { 00:12:11.748 "name": "BaseBdev4", 00:12:11.748 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:11.748 "is_configured": true, 00:12:11.748 "data_offset": 0, 00:12:11.748 "data_size": 65536 00:12:11.748 } 00:12:11.748 ] 00:12:11.748 }' 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.748 [2024-11-26 15:28:10.087830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.748 [2024-11-26 15:28:10.092028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a250 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.748 15:28:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:11.748 [2024-11-26 15:28:10.093929] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.685 15:28:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.685 "name": "raid_bdev1", 00:12:12.685 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:12.685 "strip_size_kb": 0, 00:12:12.685 "state": "online", 00:12:12.685 "raid_level": "raid1", 00:12:12.685 "superblock": false, 00:12:12.685 "num_base_bdevs": 4, 00:12:12.685 "num_base_bdevs_discovered": 4, 00:12:12.685 "num_base_bdevs_operational": 4, 00:12:12.685 "process": { 00:12:12.685 "type": "rebuild", 00:12:12.685 "target": "spare", 00:12:12.685 "progress": { 00:12:12.685 "blocks": 20480, 00:12:12.685 "percent": 31 00:12:12.685 } 00:12:12.685 }, 00:12:12.685 "base_bdevs_list": [ 00:12:12.685 { 00:12:12.685 "name": "spare", 00:12:12.685 "uuid": "dc604ab8-2613-51a3-9b1d-89e5f8c1b1f6", 00:12:12.685 "is_configured": true, 00:12:12.685 "data_offset": 0, 00:12:12.685 "data_size": 65536 00:12:12.685 }, 00:12:12.685 { 
00:12:12.685 "name": "BaseBdev2", 00:12:12.685 "uuid": "fd052376-6b31-503e-bb44-c7e04c7a4c74", 00:12:12.685 "is_configured": true, 00:12:12.685 "data_offset": 0, 00:12:12.685 "data_size": 65536 00:12:12.685 }, 00:12:12.685 { 00:12:12.685 "name": "BaseBdev3", 00:12:12.685 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:12.685 "is_configured": true, 00:12:12.685 "data_offset": 0, 00:12:12.685 "data_size": 65536 00:12:12.685 }, 00:12:12.685 { 00:12:12.685 "name": "BaseBdev4", 00:12:12.685 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:12.685 "is_configured": true, 00:12:12.685 "data_offset": 0, 00:12:12.685 "data_size": 65536 00:12:12.685 } 00:12:12.685 ] 00:12:12.685 }' 00:12:12.685 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.944 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.944 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.944 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.944 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:12.944 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:12.944 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:12.944 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:12.944 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.945 [2024-11-26 15:28:11.252771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:12.945 
[2024-11-26 15:28:11.300708] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0a250 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.945 "name": "raid_bdev1", 00:12:12.945 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:12.945 "strip_size_kb": 0, 00:12:12.945 "state": "online", 00:12:12.945 "raid_level": "raid1", 00:12:12.945 "superblock": false, 00:12:12.945 "num_base_bdevs": 4, 00:12:12.945 "num_base_bdevs_discovered": 3, 00:12:12.945 "num_base_bdevs_operational": 3, 00:12:12.945 "process": { 
00:12:12.945 "type": "rebuild", 00:12:12.945 "target": "spare", 00:12:12.945 "progress": { 00:12:12.945 "blocks": 24576, 00:12:12.945 "percent": 37 00:12:12.945 } 00:12:12.945 }, 00:12:12.945 "base_bdevs_list": [ 00:12:12.945 { 00:12:12.945 "name": "spare", 00:12:12.945 "uuid": "dc604ab8-2613-51a3-9b1d-89e5f8c1b1f6", 00:12:12.945 "is_configured": true, 00:12:12.945 "data_offset": 0, 00:12:12.945 "data_size": 65536 00:12:12.945 }, 00:12:12.945 { 00:12:12.945 "name": null, 00:12:12.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.945 "is_configured": false, 00:12:12.945 "data_offset": 0, 00:12:12.945 "data_size": 65536 00:12:12.945 }, 00:12:12.945 { 00:12:12.945 "name": "BaseBdev3", 00:12:12.945 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:12.945 "is_configured": true, 00:12:12.945 "data_offset": 0, 00:12:12.945 "data_size": 65536 00:12:12.945 }, 00:12:12.945 { 00:12:12.945 "name": "BaseBdev4", 00:12:12.945 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:12.945 "is_configured": true, 00:12:12.945 "data_offset": 0, 00:12:12.945 "data_size": 65536 00:12:12.945 } 00:12:12.945 ] 00:12:12.945 }' 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.945 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=350 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.204 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.204 "name": "raid_bdev1", 00:12:13.204 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:13.204 "strip_size_kb": 0, 00:12:13.204 "state": "online", 00:12:13.204 "raid_level": "raid1", 00:12:13.204 "superblock": false, 00:12:13.204 "num_base_bdevs": 4, 00:12:13.204 "num_base_bdevs_discovered": 3, 00:12:13.204 "num_base_bdevs_operational": 3, 00:12:13.204 "process": { 00:12:13.204 "type": "rebuild", 00:12:13.204 "target": "spare", 00:12:13.204 "progress": { 00:12:13.204 "blocks": 26624, 00:12:13.204 "percent": 40 00:12:13.204 } 00:12:13.204 }, 00:12:13.204 "base_bdevs_list": [ 00:12:13.204 { 00:12:13.204 "name": "spare", 00:12:13.204 "uuid": "dc604ab8-2613-51a3-9b1d-89e5f8c1b1f6", 00:12:13.204 "is_configured": true, 00:12:13.204 "data_offset": 0, 00:12:13.204 "data_size": 65536 00:12:13.204 }, 00:12:13.204 { 00:12:13.204 "name": null, 00:12:13.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.204 "is_configured": false, 00:12:13.205 "data_offset": 0, 00:12:13.205 "data_size": 65536 00:12:13.205 }, 
00:12:13.205 { 00:12:13.205 "name": "BaseBdev3", 00:12:13.205 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:13.205 "is_configured": true, 00:12:13.205 "data_offset": 0, 00:12:13.205 "data_size": 65536 00:12:13.205 }, 00:12:13.205 { 00:12:13.205 "name": "BaseBdev4", 00:12:13.205 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:13.205 "is_configured": true, 00:12:13.205 "data_offset": 0, 00:12:13.205 "data_size": 65536 00:12:13.205 } 00:12:13.205 ] 00:12:13.205 }' 00:12:13.205 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.205 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.205 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.205 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.205 15:28:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.165 "name": "raid_bdev1", 00:12:14.165 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:14.165 "strip_size_kb": 0, 00:12:14.165 "state": "online", 00:12:14.165 "raid_level": "raid1", 00:12:14.165 "superblock": false, 00:12:14.165 "num_base_bdevs": 4, 00:12:14.165 "num_base_bdevs_discovered": 3, 00:12:14.165 "num_base_bdevs_operational": 3, 00:12:14.165 "process": { 00:12:14.165 "type": "rebuild", 00:12:14.165 "target": "spare", 00:12:14.165 "progress": { 00:12:14.165 "blocks": 49152, 00:12:14.165 "percent": 75 00:12:14.165 } 00:12:14.165 }, 00:12:14.165 "base_bdevs_list": [ 00:12:14.165 { 00:12:14.165 "name": "spare", 00:12:14.165 "uuid": "dc604ab8-2613-51a3-9b1d-89e5f8c1b1f6", 00:12:14.165 "is_configured": true, 00:12:14.165 "data_offset": 0, 00:12:14.165 "data_size": 65536 00:12:14.165 }, 00:12:14.165 { 00:12:14.165 "name": null, 00:12:14.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.165 "is_configured": false, 00:12:14.165 "data_offset": 0, 00:12:14.165 "data_size": 65536 00:12:14.165 }, 00:12:14.165 { 00:12:14.165 "name": "BaseBdev3", 00:12:14.165 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:14.165 "is_configured": true, 00:12:14.165 "data_offset": 0, 00:12:14.165 "data_size": 65536 00:12:14.165 }, 00:12:14.165 { 00:12:14.165 "name": "BaseBdev4", 00:12:14.165 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:14.165 "is_configured": true, 00:12:14.165 "data_offset": 0, 00:12:14.165 "data_size": 65536 00:12:14.165 } 00:12:14.165 ] 00:12:14.165 }' 00:12:14.165 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.424 15:28:12 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.424 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.424 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.424 15:28:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:14.993 [2024-11-26 15:28:13.312008] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:14.993 [2024-11-26 15:28:13.312112] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:14.993 [2024-11-26 15:28:13.312159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.562 "name": "raid_bdev1", 00:12:15.562 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:15.562 "strip_size_kb": 0, 00:12:15.562 "state": "online", 00:12:15.562 "raid_level": "raid1", 00:12:15.562 "superblock": false, 00:12:15.562 "num_base_bdevs": 4, 00:12:15.562 "num_base_bdevs_discovered": 3, 00:12:15.562 "num_base_bdevs_operational": 3, 00:12:15.562 "base_bdevs_list": [ 00:12:15.562 { 00:12:15.562 "name": "spare", 00:12:15.562 "uuid": "dc604ab8-2613-51a3-9b1d-89e5f8c1b1f6", 00:12:15.562 "is_configured": true, 00:12:15.562 "data_offset": 0, 00:12:15.562 "data_size": 65536 00:12:15.562 }, 00:12:15.562 { 00:12:15.562 "name": null, 00:12:15.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.562 "is_configured": false, 00:12:15.562 "data_offset": 0, 00:12:15.562 "data_size": 65536 00:12:15.562 }, 00:12:15.562 { 00:12:15.562 "name": "BaseBdev3", 00:12:15.562 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:15.562 "is_configured": true, 00:12:15.562 "data_offset": 0, 00:12:15.562 "data_size": 65536 00:12:15.562 }, 00:12:15.562 { 00:12:15.562 "name": "BaseBdev4", 00:12:15.562 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:15.562 "is_configured": true, 00:12:15.562 "data_offset": 0, 00:12:15.562 "data_size": 65536 00:12:15.562 } 00:12:15.562 ] 00:12:15.562 }' 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.562 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.563 "name": "raid_bdev1", 00:12:15.563 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:15.563 "strip_size_kb": 0, 00:12:15.563 "state": "online", 00:12:15.563 "raid_level": "raid1", 00:12:15.563 "superblock": false, 00:12:15.563 "num_base_bdevs": 4, 00:12:15.563 "num_base_bdevs_discovered": 3, 00:12:15.563 "num_base_bdevs_operational": 3, 00:12:15.563 "base_bdevs_list": [ 00:12:15.563 { 00:12:15.563 "name": "spare", 00:12:15.563 "uuid": "dc604ab8-2613-51a3-9b1d-89e5f8c1b1f6", 00:12:15.563 "is_configured": true, 00:12:15.563 "data_offset": 0, 00:12:15.563 "data_size": 65536 00:12:15.563 }, 00:12:15.563 { 00:12:15.563 "name": null, 00:12:15.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.563 "is_configured": false, 00:12:15.563 "data_offset": 0, 00:12:15.563 "data_size": 65536 00:12:15.563 }, 00:12:15.563 { 00:12:15.563 "name": "BaseBdev3", 00:12:15.563 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 
00:12:15.563 "is_configured": true, 00:12:15.563 "data_offset": 0, 00:12:15.563 "data_size": 65536 00:12:15.563 }, 00:12:15.563 { 00:12:15.563 "name": "BaseBdev4", 00:12:15.563 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:15.563 "is_configured": true, 00:12:15.563 "data_offset": 0, 00:12:15.563 "data_size": 65536 00:12:15.563 } 00:12:15.563 ] 00:12:15.563 }' 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.563 
15:28:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.563 15:28:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.563 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.563 "name": "raid_bdev1", 00:12:15.563 "uuid": "d5f6ccf6-602f-4a69-8657-6b2c7462e189", 00:12:15.563 "strip_size_kb": 0, 00:12:15.563 "state": "online", 00:12:15.563 "raid_level": "raid1", 00:12:15.563 "superblock": false, 00:12:15.563 "num_base_bdevs": 4, 00:12:15.563 "num_base_bdevs_discovered": 3, 00:12:15.563 "num_base_bdevs_operational": 3, 00:12:15.563 "base_bdevs_list": [ 00:12:15.563 { 00:12:15.563 "name": "spare", 00:12:15.563 "uuid": "dc604ab8-2613-51a3-9b1d-89e5f8c1b1f6", 00:12:15.563 "is_configured": true, 00:12:15.563 "data_offset": 0, 00:12:15.563 "data_size": 65536 00:12:15.563 }, 00:12:15.563 { 00:12:15.563 "name": null, 00:12:15.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.563 "is_configured": false, 00:12:15.563 "data_offset": 0, 00:12:15.563 "data_size": 65536 00:12:15.563 }, 00:12:15.563 { 00:12:15.563 "name": "BaseBdev3", 00:12:15.563 "uuid": "73c08c47-1d70-506a-a5bc-6f5e1a1877df", 00:12:15.563 "is_configured": true, 00:12:15.563 "data_offset": 0, 00:12:15.563 "data_size": 65536 00:12:15.563 }, 00:12:15.563 { 00:12:15.563 "name": "BaseBdev4", 00:12:15.563 "uuid": "326f508f-fd2c-5bb8-b1a2-5e180a832c12", 00:12:15.563 "is_configured": true, 00:12:15.563 "data_offset": 0, 00:12:15.563 "data_size": 65536 00:12:15.563 } 00:12:15.563 ] 00:12:15.563 }' 00:12:15.563 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.563 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.132 [2024-11-26 15:28:14.380890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.132 [2024-11-26 15:28:14.380927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.132 [2024-11-26 15:28:14.381017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.132 [2024-11-26 15:28:14.381121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.132 [2024-11-26 15:28:14.381135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:16.132 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:16.392 /dev/nbd0 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:16.392 
15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.392 1+0 records in 00:12:16.392 1+0 records out 00:12:16.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366422 s, 11.2 MB/s 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:16.392 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:16.651 /dev/nbd1 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:16.651 15:28:14 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:16.651 1+0 records in 00:12:16.651 1+0 records out 00:12:16.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400306 s, 10.2 MB/s 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:16.651 15:28:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:16.651 15:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:16.651 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.651 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:16.651 15:28:15 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:16.651 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:16.651 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.651 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.910 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.169 15:28:15 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 89717 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 89717 ']' 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 89717 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89717 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.169 killing process with pid 89717 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89717' 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 89717 00:12:17.169 Received shutdown signal, test time was about 60.000000 seconds 00:12:17.169 00:12:17.169 Latency(us) 00:12:17.169 [2024-11-26T15:28:15.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.169 [2024-11-26T15:28:15.648Z] =================================================================================================================== 00:12:17.169 [2024-11-26T15:28:15.648Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:17.169 [2024-11-26 
15:28:15.505572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.169 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 89717 00:12:17.169 [2024-11-26 15:28:15.557609] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:17.428 00:12:17.428 real 0m14.882s 00:12:17.428 user 0m17.305s 00:12:17.428 sys 0m2.734s 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.428 ************************************ 00:12:17.428 END TEST raid_rebuild_test 00:12:17.428 ************************************ 00:12:17.428 15:28:15 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:17.428 15:28:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:17.428 15:28:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.428 15:28:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.428 ************************************ 00:12:17.428 START TEST raid_rebuild_test_sb 00:12:17.428 ************************************ 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:17.428 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:17.429 15:28:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=90135 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 90135 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 90135 ']' 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.429 15:28:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.687 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.687 Zero copy mechanism will not be used. 
00:12:17.687 [2024-11-26 15:28:15.973874] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:12:17.687 [2024-11-26 15:28:15.974136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90135 ] 00:12:17.687 [2024-11-26 15:28:16.114950] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:17.687 [2024-11-26 15:28:16.152276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.946 [2024-11-26 15:28:16.177545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.946 [2024-11-26 15:28:16.219744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.946 [2024-11-26 15:28:16.219785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.513 BaseBdev1_malloc 00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.513 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.513 [2024-11-26 15:28:16.827939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:18.513 [2024-11-26 15:28:16.828024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.513 [2024-11-26 15:28:16.828064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.513 [2024-11-26 15:28:16.828081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.513 [2024-11-26 15:28:16.830585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.513 [2024-11-26 15:28:16.830629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:18.514 BaseBdev1 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.514 BaseBdev2_malloc 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:18.514 [2024-11-26 15:28:16.849248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:18.514 [2024-11-26 15:28:16.849314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.514 [2024-11-26 15:28:16.849337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:18.514 [2024-11-26 15:28:16.849348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.514 [2024-11-26 15:28:16.851568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.514 [2024-11-26 15:28:16.851607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:18.514 BaseBdev2 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.514 BaseBdev3_malloc 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.514 [2024-11-26 15:28:16.870025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:18.514 [2024-11-26 15:28:16.870083] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.514 [2024-11-26 15:28:16.870105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:18.514 [2024-11-26 15:28:16.870116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.514 [2024-11-26 15:28:16.872122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.514 [2024-11-26 15:28:16.872162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:18.514 BaseBdev3 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.514 BaseBdev4_malloc 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.514 [2024-11-26 15:28:16.900527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:18.514 [2024-11-26 15:28:16.900587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.514 [2024-11-26 15:28:16.900614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 
00:12:18.514 [2024-11-26 15:28:16.900628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.514 [2024-11-26 15:28:16.903135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.514 [2024-11-26 15:28:16.903201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:18.514 BaseBdev4 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.514 spare_malloc 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.514 spare_delay 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.514 [2024-11-26 15:28:16.929168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.514 [2024-11-26 15:28:16.929243] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.514 [2024-11-26 15:28:16.929268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:18.514 [2024-11-26 15:28:16.929281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.514 [2024-11-26 15:28:16.931352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.514 [2024-11-26 15:28:16.931392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.514 spare 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.514 [2024-11-26 15:28:16.937255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.514 [2024-11-26 15:28:16.939085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.514 [2024-11-26 15:28:16.939151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.514 [2024-11-26 15:28:16.939214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:18.514 [2024-11-26 15:28:16.939401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:18.514 [2024-11-26 15:28:16.939420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:18.514 [2024-11-26 15:28:16.939694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:18.514 [2024-11-26 15:28:16.939867] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:18.514 [2024-11-26 15:28:16.939886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:18.514 [2024-11-26 15:28:16.940010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.514 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.514 "name": "raid_bdev1", 00:12:18.514 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:18.514 "strip_size_kb": 0, 00:12:18.514 "state": "online", 00:12:18.514 "raid_level": "raid1", 00:12:18.514 "superblock": true, 00:12:18.514 "num_base_bdevs": 4, 00:12:18.514 "num_base_bdevs_discovered": 4, 00:12:18.514 "num_base_bdevs_operational": 4, 00:12:18.514 "base_bdevs_list": [ 00:12:18.514 { 00:12:18.514 "name": "BaseBdev1", 00:12:18.514 "uuid": "90d7222b-3465-57cd-843a-d633eb5f017c", 00:12:18.514 "is_configured": true, 00:12:18.514 "data_offset": 2048, 00:12:18.514 "data_size": 63488 00:12:18.514 }, 00:12:18.514 { 00:12:18.514 "name": "BaseBdev2", 00:12:18.514 "uuid": "021fd190-e7d8-54c9-87ee-a599fd865278", 00:12:18.514 "is_configured": true, 00:12:18.514 "data_offset": 2048, 00:12:18.514 "data_size": 63488 00:12:18.514 }, 00:12:18.514 { 00:12:18.514 "name": "BaseBdev3", 00:12:18.514 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:18.514 "is_configured": true, 00:12:18.514 "data_offset": 2048, 00:12:18.514 "data_size": 63488 00:12:18.514 }, 00:12:18.515 { 00:12:18.515 "name": "BaseBdev4", 00:12:18.515 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:18.515 "is_configured": true, 00:12:18.515 "data_offset": 2048, 00:12:18.515 "data_size": 63488 00:12:18.515 } 00:12:18.515 ] 00:12:18.515 }' 00:12:18.515 15:28:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.515 15:28:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:19.081 15:28:17 
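The `verify_raid_bdev_state` calls traced above boil down to pulling the raid bdev's JSON out of `rpc_cmd bdev_raid_get_bdevs all`, selecting the entry named `raid_bdev1` with jq, and comparing state, level and base-bdev counts against the expected values. A minimal sketch of that check, using a snippet captured from the dump above in place of a live RPC call (the field values are taken from the log; this is not the test script itself):

```shell
# Stand-in for: rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
raid_bdev_info='{"name": "raid_bdev1", "state": "online", "raid_level": "raid1", "strip_size_kb": 0, "num_base_bdevs": 4, "num_base_bdevs_discovered": 4, "num_base_bdevs_operational": 4}'
# Extract the fields the test asserts on.
state=$(printf '%s' "$raid_bdev_info" | jq -r '.state')
level=$(printf '%s' "$raid_bdev_info" | jq -r '.raid_level')
discovered=$(printf '%s' "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
echo "$state $level $discovered"   # online raid1 4
```

The same pattern repeats later in the run with `num_base_bdevs_discovered` dropping to 3 after a base bdev is removed.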
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.081 [2024-11-26 15:28:17.409756] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.081 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:19.340 [2024-11-26 15:28:17.681522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:19.341 /dev/nbd0 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:12:19.341 1+0 records in 00:12:19.341 1+0 records out 00:12:19.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406005 s, 10.1 MB/s 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:19.341 15:28:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:24.613 63488+0 records in 00:12:24.613 63488+0 records out 00:12:24.613 32505856 bytes (33 MB, 31 MiB) copied, 5.07762 s, 6.4 MB/s 00:12:24.613 15:28:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:24.613 15:28:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.613 15:28:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:24.613 15:28:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:24.613 15:28:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:24.613 15:28:22 
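The dd totals in the log are internally consistent: the fill pass writes the raid bdev's full capacity, 63488 blocks of 512 bytes each (the `blockcnt 63488, blocklen 512` reported at configure time). A quick arithmetic check:

```shell
# Capacity reported by raid_bdev_configure_cont in the log above.
blocks=63488; blocklen=512
bytes=$((blocks * blocklen))
echo "$bytes"   # 32505856, i.e. the "32505856 bytes (33 MB, 31 MiB)" dd reports
```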
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.613 15:28:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:24.613 [2024-11-26 15:28:23.038944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.613 [2024-11-26 15:28:23.075043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.613 15:28:23 
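The `waitfornbd` / `waitfornbd_exit` loops traced above both follow the same retry pattern: poll `/proc/partitions` for the device name as a whole word, up to 20 tries. A self-contained sketch of that pattern (a temp file stands in for `/proc/partitions` so the sketch runs anywhere; `waitfor_name` is a hypothetical helper name, not from the test suite):

```shell
waitfor_name() {
  # Poll a partitions listing for a whole-word device name, up to 20 tries.
  name=$1; partitions=$2; i=1
  while [ "$i" -le 20 ]; do
    grep -q -w "$name" "$partitions" && return 0
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}

tmp=$(mktemp)
echo "nbd0" > "$tmp"          # simulate the device appearing
ready=no
waitfor_name nbd0 "$tmp" && ready=yes
echo "$ready"                 # yes
rm -f "$tmp"
```

The exit-side loop in the log inverts the condition: it breaks once `grep -q -w nbd0 /proc/partitions` no longer matches.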
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.613 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.872 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.872 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.872 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.872 "name": "raid_bdev1", 00:12:24.872 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:24.872 "strip_size_kb": 0, 00:12:24.872 "state": "online", 00:12:24.872 "raid_level": "raid1", 00:12:24.872 "superblock": true, 00:12:24.872 "num_base_bdevs": 4, 00:12:24.872 "num_base_bdevs_discovered": 3, 00:12:24.872 "num_base_bdevs_operational": 3, 00:12:24.872 "base_bdevs_list": [ 00:12:24.872 { 00:12:24.872 "name": null, 00:12:24.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.872 
"is_configured": false, 00:12:24.872 "data_offset": 0, 00:12:24.872 "data_size": 63488 00:12:24.872 }, 00:12:24.872 { 00:12:24.872 "name": "BaseBdev2", 00:12:24.872 "uuid": "021fd190-e7d8-54c9-87ee-a599fd865278", 00:12:24.872 "is_configured": true, 00:12:24.872 "data_offset": 2048, 00:12:24.872 "data_size": 63488 00:12:24.872 }, 00:12:24.872 { 00:12:24.872 "name": "BaseBdev3", 00:12:24.872 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:24.872 "is_configured": true, 00:12:24.872 "data_offset": 2048, 00:12:24.872 "data_size": 63488 00:12:24.872 }, 00:12:24.872 { 00:12:24.872 "name": "BaseBdev4", 00:12:24.872 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:24.872 "is_configured": true, 00:12:24.872 "data_offset": 2048, 00:12:24.872 "data_size": 63488 00:12:24.872 } 00:12:24.872 ] 00:12:24.872 }' 00:12:24.872 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.872 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.131 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:25.131 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.131 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.131 [2024-11-26 15:28:23.547193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.131 [2024-11-26 15:28:23.551433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3910 00:12:25.131 15:28:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.131 15:28:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:25.131 [2024-11-26 15:28:23.553317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.510 "name": "raid_bdev1", 00:12:26.510 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:26.510 "strip_size_kb": 0, 00:12:26.510 "state": "online", 00:12:26.510 "raid_level": "raid1", 00:12:26.510 "superblock": true, 00:12:26.510 "num_base_bdevs": 4, 00:12:26.510 "num_base_bdevs_discovered": 4, 00:12:26.510 "num_base_bdevs_operational": 4, 00:12:26.510 "process": { 00:12:26.510 "type": "rebuild", 00:12:26.510 "target": "spare", 00:12:26.510 "progress": { 00:12:26.510 "blocks": 20480, 00:12:26.510 "percent": 32 00:12:26.510 } 00:12:26.510 }, 00:12:26.510 "base_bdevs_list": [ 00:12:26.510 { 00:12:26.510 "name": "spare", 00:12:26.510 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:26.510 "is_configured": true, 00:12:26.510 "data_offset": 2048, 00:12:26.510 "data_size": 63488 00:12:26.510 }, 00:12:26.510 { 
00:12:26.510 "name": "BaseBdev2", 00:12:26.510 "uuid": "021fd190-e7d8-54c9-87ee-a599fd865278", 00:12:26.510 "is_configured": true, 00:12:26.510 "data_offset": 2048, 00:12:26.510 "data_size": 63488 00:12:26.510 }, 00:12:26.510 { 00:12:26.510 "name": "BaseBdev3", 00:12:26.510 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:26.510 "is_configured": true, 00:12:26.510 "data_offset": 2048, 00:12:26.510 "data_size": 63488 00:12:26.510 }, 00:12:26.510 { 00:12:26.510 "name": "BaseBdev4", 00:12:26.510 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:26.510 "is_configured": true, 00:12:26.510 "data_offset": 2048, 00:12:26.510 "data_size": 63488 00:12:26.510 } 00:12:26.510 ] 00:12:26.510 }' 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.510 [2024-11-26 15:28:24.712355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.510 [2024-11-26 15:28:24.760320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:26.510 [2024-11-26 15:28:24.760406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.510 [2024-11-26 15:28:24.760423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.510 [2024-11-26 15:28:24.760441] 
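The rebuild progress figures in the dumps are also self-consistent: `percent` is just blocks rebuilt over the raid size in blocks (63488, from the log), truncated to an integer.

```shell
# Values from the "progress" object in the rebuild dump above.
blocks=20480; raid_size=63488
percent=$((blocks * 100 / raid_size))
echo "$percent"   # 32, matching "percent": 32
```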
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.510 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.510 "name": 
"raid_bdev1", 00:12:26.510 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:26.510 "strip_size_kb": 0, 00:12:26.510 "state": "online", 00:12:26.510 "raid_level": "raid1", 00:12:26.510 "superblock": true, 00:12:26.510 "num_base_bdevs": 4, 00:12:26.510 "num_base_bdevs_discovered": 3, 00:12:26.510 "num_base_bdevs_operational": 3, 00:12:26.510 "base_bdevs_list": [ 00:12:26.510 { 00:12:26.510 "name": null, 00:12:26.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.510 "is_configured": false, 00:12:26.510 "data_offset": 0, 00:12:26.510 "data_size": 63488 00:12:26.510 }, 00:12:26.510 { 00:12:26.510 "name": "BaseBdev2", 00:12:26.510 "uuid": "021fd190-e7d8-54c9-87ee-a599fd865278", 00:12:26.511 "is_configured": true, 00:12:26.511 "data_offset": 2048, 00:12:26.511 "data_size": 63488 00:12:26.511 }, 00:12:26.511 { 00:12:26.511 "name": "BaseBdev3", 00:12:26.511 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:26.511 "is_configured": true, 00:12:26.511 "data_offset": 2048, 00:12:26.511 "data_size": 63488 00:12:26.511 }, 00:12:26.511 { 00:12:26.511 "name": "BaseBdev4", 00:12:26.511 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:26.511 "is_configured": true, 00:12:26.511 "data_offset": 2048, 00:12:26.511 "data_size": 63488 00:12:26.511 } 00:12:26.511 ] 00:12:26.511 }' 00:12:26.511 15:28:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.511 15:28:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.770 15:28:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.770 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.770 "name": "raid_bdev1", 00:12:26.770 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:26.770 "strip_size_kb": 0, 00:12:26.770 "state": "online", 00:12:26.770 "raid_level": "raid1", 00:12:26.770 "superblock": true, 00:12:26.770 "num_base_bdevs": 4, 00:12:26.770 "num_base_bdevs_discovered": 3, 00:12:26.770 "num_base_bdevs_operational": 3, 00:12:26.770 "base_bdevs_list": [ 00:12:26.770 { 00:12:26.770 "name": null, 00:12:26.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.770 "is_configured": false, 00:12:26.770 "data_offset": 0, 00:12:26.770 "data_size": 63488 00:12:26.770 }, 00:12:26.770 { 00:12:26.770 "name": "BaseBdev2", 00:12:26.770 "uuid": "021fd190-e7d8-54c9-87ee-a599fd865278", 00:12:26.770 "is_configured": true, 00:12:26.770 "data_offset": 2048, 00:12:26.770 "data_size": 63488 00:12:26.770 }, 00:12:26.770 { 00:12:26.770 "name": "BaseBdev3", 00:12:26.770 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:26.770 "is_configured": true, 00:12:26.770 "data_offset": 2048, 00:12:26.770 "data_size": 63488 00:12:26.770 }, 00:12:26.770 { 00:12:26.770 "name": "BaseBdev4", 00:12:26.770 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:26.770 "is_configured": true, 00:12:26.770 "data_offset": 2048, 00:12:26.770 
"data_size": 63488 00:12:26.770 } 00:12:26.770 ] 00:12:26.770 }' 00:12:27.030 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.030 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:27.030 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.030 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.030 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:27.030 15:28:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.030 15:28:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.030 [2024-11-26 15:28:25.345195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.030 [2024-11-26 15:28:25.349324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca39e0 00:12:27.030 15:28:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.030 15:28:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:27.030 [2024-11-26 15:28:25.351125] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.970 "name": "raid_bdev1", 00:12:27.970 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:27.970 "strip_size_kb": 0, 00:12:27.970 "state": "online", 00:12:27.970 "raid_level": "raid1", 00:12:27.970 "superblock": true, 00:12:27.970 "num_base_bdevs": 4, 00:12:27.970 "num_base_bdevs_discovered": 4, 00:12:27.970 "num_base_bdevs_operational": 4, 00:12:27.970 "process": { 00:12:27.970 "type": "rebuild", 00:12:27.970 "target": "spare", 00:12:27.970 "progress": { 00:12:27.970 "blocks": 20480, 00:12:27.970 "percent": 32 00:12:27.970 } 00:12:27.970 }, 00:12:27.970 "base_bdevs_list": [ 00:12:27.970 { 00:12:27.970 "name": "spare", 00:12:27.970 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:27.970 "is_configured": true, 00:12:27.970 "data_offset": 2048, 00:12:27.970 "data_size": 63488 00:12:27.970 }, 00:12:27.970 { 00:12:27.970 "name": "BaseBdev2", 00:12:27.970 "uuid": "021fd190-e7d8-54c9-87ee-a599fd865278", 00:12:27.970 "is_configured": true, 00:12:27.970 "data_offset": 2048, 00:12:27.970 "data_size": 63488 00:12:27.970 }, 00:12:27.970 { 00:12:27.970 "name": "BaseBdev3", 00:12:27.970 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:27.970 "is_configured": true, 00:12:27.970 "data_offset": 2048, 00:12:27.970 "data_size": 63488 00:12:27.970 }, 00:12:27.970 { 00:12:27.970 "name": "BaseBdev4", 00:12:27.970 "uuid": 
"0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:27.970 "is_configured": true, 00:12:27.970 "data_offset": 2048, 00:12:27.970 "data_size": 63488 00:12:27.970 } 00:12:27.970 ] 00:12:27.970 }' 00:12:27.970 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.256 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.256 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.256 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:28.257 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.257 [2024-11-26 15:28:26.510400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:28.257 [2024-11-26 15:28:26.657491] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca39e0 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.257 15:28:26 
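The `line 666: [: =: unary operator expected` error captured above is the classic unquoted-empty-variable bug: the script ran `'[' = false ']'`, i.e. a test whose left operand expanded to nothing. A minimal reproduction of the failure mode (generic sketch, not the actual `bdev_raid.sh` code):

```shell
# With flag empty and unquoted, the test expands to `[ = false ]`, which the
# test builtin cannot parse -- the same failure mode as the log's
# "line 666: [: =: unary operator expected". Quoting keeps it well-formed.
flag=""
[ $flag = false ] 2>/dev/null; rc_unquoted=$?
[ "$flag" = false ];           rc_quoted=$?
echo "$rc_unquoted $rc_quoted"   # error status (>1) vs. a plain false comparison (1)
```

Because the test script treats the error's nonzero status like an ordinary false, the run falls through to the `num_base_bdevs_operational` path and continues, which is why the log proceeds normally after the error.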
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.257 "name": "raid_bdev1", 00:12:28.257 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:28.257 "strip_size_kb": 0, 00:12:28.257 "state": "online", 00:12:28.257 "raid_level": "raid1", 00:12:28.257 "superblock": true, 00:12:28.257 "num_base_bdevs": 4, 00:12:28.257 "num_base_bdevs_discovered": 3, 00:12:28.257 "num_base_bdevs_operational": 3, 00:12:28.257 "process": { 00:12:28.257 "type": "rebuild", 00:12:28.257 "target": "spare", 00:12:28.257 "progress": { 00:12:28.257 "blocks": 24576, 00:12:28.257 "percent": 38 00:12:28.257 } 00:12:28.257 }, 00:12:28.257 "base_bdevs_list": 
[ 00:12:28.257 { 00:12:28.257 "name": "spare", 00:12:28.257 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:28.257 "is_configured": true, 00:12:28.257 "data_offset": 2048, 00:12:28.257 "data_size": 63488 00:12:28.257 }, 00:12:28.257 { 00:12:28.257 "name": null, 00:12:28.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.257 "is_configured": false, 00:12:28.257 "data_offset": 0, 00:12:28.257 "data_size": 63488 00:12:28.257 }, 00:12:28.257 { 00:12:28.257 "name": "BaseBdev3", 00:12:28.257 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:28.257 "is_configured": true, 00:12:28.257 "data_offset": 2048, 00:12:28.257 "data_size": 63488 00:12:28.257 }, 00:12:28.257 { 00:12:28.257 "name": "BaseBdev4", 00:12:28.257 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:28.257 "is_configured": true, 00:12:28.257 "data_offset": 2048, 00:12:28.257 "data_size": 63488 00:12:28.257 } 00:12:28.257 ] 00:12:28.257 }' 00:12:28.257 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=365 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.517 15:28:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.517 "name": "raid_bdev1", 00:12:28.517 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:28.517 "strip_size_kb": 0, 00:12:28.517 "state": "online", 00:12:28.517 "raid_level": "raid1", 00:12:28.517 "superblock": true, 00:12:28.517 "num_base_bdevs": 4, 00:12:28.517 "num_base_bdevs_discovered": 3, 00:12:28.517 "num_base_bdevs_operational": 3, 00:12:28.517 "process": { 00:12:28.517 "type": "rebuild", 00:12:28.517 "target": "spare", 00:12:28.517 "progress": { 00:12:28.517 "blocks": 26624, 00:12:28.517 "percent": 41 00:12:28.517 } 00:12:28.517 }, 00:12:28.517 "base_bdevs_list": [ 00:12:28.517 { 00:12:28.517 "name": "spare", 00:12:28.517 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:28.517 "is_configured": true, 00:12:28.517 "data_offset": 2048, 00:12:28.517 "data_size": 63488 00:12:28.517 }, 00:12:28.517 { 00:12:28.517 "name": null, 00:12:28.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.517 "is_configured": false, 00:12:28.517 "data_offset": 0, 00:12:28.517 "data_size": 63488 00:12:28.517 }, 00:12:28.517 { 00:12:28.517 "name": "BaseBdev3", 00:12:28.517 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:28.517 
"is_configured": true, 00:12:28.517 "data_offset": 2048, 00:12:28.517 "data_size": 63488 00:12:28.517 }, 00:12:28.517 { 00:12:28.517 "name": "BaseBdev4", 00:12:28.517 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:28.517 "is_configured": true, 00:12:28.517 "data_offset": 2048, 00:12:28.517 "data_size": 63488 00:12:28.517 } 00:12:28.517 ] 00:12:28.517 }' 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.517 15:28:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.899 "name": "raid_bdev1", 00:12:29.899 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:29.899 "strip_size_kb": 0, 00:12:29.899 "state": "online", 00:12:29.899 "raid_level": "raid1", 00:12:29.899 "superblock": true, 00:12:29.899 "num_base_bdevs": 4, 00:12:29.899 "num_base_bdevs_discovered": 3, 00:12:29.899 "num_base_bdevs_operational": 3, 00:12:29.899 "process": { 00:12:29.899 "type": "rebuild", 00:12:29.899 "target": "spare", 00:12:29.899 "progress": { 00:12:29.899 "blocks": 49152, 00:12:29.899 "percent": 77 00:12:29.899 } 00:12:29.899 }, 00:12:29.899 "base_bdevs_list": [ 00:12:29.899 { 00:12:29.899 "name": "spare", 00:12:29.899 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:29.899 "is_configured": true, 00:12:29.899 "data_offset": 2048, 00:12:29.899 "data_size": 63488 00:12:29.899 }, 00:12:29.899 { 00:12:29.899 "name": null, 00:12:29.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.899 "is_configured": false, 00:12:29.899 "data_offset": 0, 00:12:29.899 "data_size": 63488 00:12:29.899 }, 00:12:29.899 { 00:12:29.899 "name": "BaseBdev3", 00:12:29.899 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:29.899 "is_configured": true, 00:12:29.899 "data_offset": 2048, 00:12:29.899 "data_size": 63488 00:12:29.899 }, 00:12:29.899 { 00:12:29.899 "name": "BaseBdev4", 00:12:29.899 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:29.899 "is_configured": true, 00:12:29.899 "data_offset": 2048, 00:12:29.899 "data_size": 63488 00:12:29.899 } 00:12:29.899 ] 00:12:29.899 }' 00:12:29.899 15:28:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.899 15:28:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:12:29.899 15:28:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.899 15:28:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.899 15:28:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:30.159 [2024-11-26 15:28:28.568260] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:30.159 [2024-11-26 15:28:28.568349] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:30.159 [2024-11-26 15:28:28.568471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.729 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- 
# raid_bdev_info='{ 00:12:30.729 "name": "raid_bdev1", 00:12:30.729 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:30.729 "strip_size_kb": 0, 00:12:30.729 "state": "online", 00:12:30.729 "raid_level": "raid1", 00:12:30.729 "superblock": true, 00:12:30.729 "num_base_bdevs": 4, 00:12:30.729 "num_base_bdevs_discovered": 3, 00:12:30.729 "num_base_bdevs_operational": 3, 00:12:30.729 "base_bdevs_list": [ 00:12:30.729 { 00:12:30.729 "name": "spare", 00:12:30.729 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:30.729 "is_configured": true, 00:12:30.729 "data_offset": 2048, 00:12:30.729 "data_size": 63488 00:12:30.729 }, 00:12:30.729 { 00:12:30.729 "name": null, 00:12:30.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.729 "is_configured": false, 00:12:30.729 "data_offset": 0, 00:12:30.729 "data_size": 63488 00:12:30.729 }, 00:12:30.729 { 00:12:30.729 "name": "BaseBdev3", 00:12:30.729 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:30.729 "is_configured": true, 00:12:30.729 "data_offset": 2048, 00:12:30.729 "data_size": 63488 00:12:30.729 }, 00:12:30.729 { 00:12:30.729 "name": "BaseBdev4", 00:12:30.729 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:30.729 "is_configured": true, 00:12:30.729 "data_offset": 2048, 00:12:30.729 "data_size": 63488 00:12:30.729 } 00:12:30.729 ] 00:12:30.730 }' 00:12:30.730 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.730 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:30.730 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.990 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.990 "name": "raid_bdev1", 00:12:30.990 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:30.990 "strip_size_kb": 0, 00:12:30.990 "state": "online", 00:12:30.990 "raid_level": "raid1", 00:12:30.990 "superblock": true, 00:12:30.990 "num_base_bdevs": 4, 00:12:30.990 "num_base_bdevs_discovered": 3, 00:12:30.990 "num_base_bdevs_operational": 3, 00:12:30.990 "base_bdevs_list": [ 00:12:30.990 { 00:12:30.990 "name": "spare", 00:12:30.990 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:30.990 "is_configured": true, 00:12:30.990 "data_offset": 2048, 00:12:30.990 "data_size": 63488 00:12:30.990 }, 00:12:30.990 { 00:12:30.990 "name": null, 00:12:30.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.990 "is_configured": false, 00:12:30.990 "data_offset": 0, 00:12:30.990 "data_size": 63488 00:12:30.990 }, 00:12:30.990 { 00:12:30.990 "name": "BaseBdev3", 00:12:30.990 "uuid": 
"5074e769-1c65-543e-9515-81de689b410b", 00:12:30.990 "is_configured": true, 00:12:30.990 "data_offset": 2048, 00:12:30.990 "data_size": 63488 00:12:30.990 }, 00:12:30.990 { 00:12:30.990 "name": "BaseBdev4", 00:12:30.991 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:30.991 "is_configured": true, 00:12:30.991 "data_offset": 2048, 00:12:30.991 "data_size": 63488 00:12:30.991 } 00:12:30.991 ] 00:12:30.991 }' 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.991 15:28:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.991 "name": "raid_bdev1", 00:12:30.991 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:30.991 "strip_size_kb": 0, 00:12:30.991 "state": "online", 00:12:30.991 "raid_level": "raid1", 00:12:30.991 "superblock": true, 00:12:30.991 "num_base_bdevs": 4, 00:12:30.991 "num_base_bdevs_discovered": 3, 00:12:30.991 "num_base_bdevs_operational": 3, 00:12:30.991 "base_bdevs_list": [ 00:12:30.991 { 00:12:30.991 "name": "spare", 00:12:30.991 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:30.991 "is_configured": true, 00:12:30.991 "data_offset": 2048, 00:12:30.991 "data_size": 63488 00:12:30.991 }, 00:12:30.991 { 00:12:30.991 "name": null, 00:12:30.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.991 "is_configured": false, 00:12:30.991 "data_offset": 0, 00:12:30.991 "data_size": 63488 00:12:30.991 }, 00:12:30.991 { 00:12:30.991 "name": "BaseBdev3", 00:12:30.991 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:30.991 "is_configured": true, 00:12:30.991 "data_offset": 2048, 00:12:30.991 "data_size": 63488 00:12:30.991 }, 00:12:30.991 { 00:12:30.991 "name": "BaseBdev4", 00:12:30.991 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:30.991 "is_configured": true, 00:12:30.991 "data_offset": 2048, 00:12:30.991 "data_size": 63488 00:12:30.991 } 00:12:30.991 ] 00:12:30.991 }' 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.991 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.562 [2024-11-26 15:28:29.821150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:31.562 [2024-11-26 15:28:29.821197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.562 [2024-11-26 15:28:29.821297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.562 [2024-11-26 15:28:29.821390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.562 [2024-11-26 15:28:29.821404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:31.562 15:28:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.562 15:28:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:31.822 /dev/nbd0 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w 
nbd0 /proc/partitions 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.822 1+0 records in 00:12:31.822 1+0 records out 00:12:31.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324131 s, 12.6 MB/s 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.822 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:32.082 /dev/nbd1 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:32.082 15:28:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.082 1+0 records in 00:12:32.082 1+0 records out 00:12:32.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285645 s, 14.3 MB/s 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.082 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.343 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:32.604 15:28:30 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.604 [2024-11-26 15:28:30.952108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.604 [2024-11-26 15:28:30.952172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.604 [2024-11-26 15:28:30.952214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:32.604 [2024-11-26 15:28:30.952223] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.604 [2024-11-26 15:28:30.954417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.604 [2024-11-26 15:28:30.954456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.604 [2024-11-26 15:28:30.954540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:32.604 [2024-11-26 15:28:30.954580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.604 [2024-11-26 15:28:30.954712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.604 [2024-11-26 15:28:30.954826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:32.604 spare 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.604 15:28:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.604 [2024-11-26 15:28:31.054891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:32.604 [2024-11-26 15:28:31.054922] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.604 [2024-11-26 15:28:31.055231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:12:32.604 [2024-11-26 15:28:31.055390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:32.604 [2024-11-26 15:28:31.055414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:32.604 [2024-11-26 15:28:31.055565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.604 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.865 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.865 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.865 "name": "raid_bdev1", 00:12:32.865 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:32.865 
"strip_size_kb": 0, 00:12:32.865 "state": "online", 00:12:32.865 "raid_level": "raid1", 00:12:32.865 "superblock": true, 00:12:32.865 "num_base_bdevs": 4, 00:12:32.865 "num_base_bdevs_discovered": 3, 00:12:32.865 "num_base_bdevs_operational": 3, 00:12:32.865 "base_bdevs_list": [ 00:12:32.865 { 00:12:32.865 "name": "spare", 00:12:32.865 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:32.865 "is_configured": true, 00:12:32.865 "data_offset": 2048, 00:12:32.865 "data_size": 63488 00:12:32.865 }, 00:12:32.865 { 00:12:32.865 "name": null, 00:12:32.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.865 "is_configured": false, 00:12:32.865 "data_offset": 2048, 00:12:32.865 "data_size": 63488 00:12:32.865 }, 00:12:32.865 { 00:12:32.865 "name": "BaseBdev3", 00:12:32.865 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:32.865 "is_configured": true, 00:12:32.865 "data_offset": 2048, 00:12:32.865 "data_size": 63488 00:12:32.865 }, 00:12:32.865 { 00:12:32.865 "name": "BaseBdev4", 00:12:32.865 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:32.865 "is_configured": true, 00:12:32.865 "data_offset": 2048, 00:12:32.865 "data_size": 63488 00:12:32.865 } 00:12:32.865 ] 00:12:32.865 }' 00:12:32.865 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.865 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.123 15:28:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.123 "name": "raid_bdev1", 00:12:33.123 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:33.123 "strip_size_kb": 0, 00:12:33.123 "state": "online", 00:12:33.123 "raid_level": "raid1", 00:12:33.123 "superblock": true, 00:12:33.123 "num_base_bdevs": 4, 00:12:33.123 "num_base_bdevs_discovered": 3, 00:12:33.123 "num_base_bdevs_operational": 3, 00:12:33.123 "base_bdevs_list": [ 00:12:33.123 { 00:12:33.123 "name": "spare", 00:12:33.123 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:33.123 "is_configured": true, 00:12:33.123 "data_offset": 2048, 00:12:33.123 "data_size": 63488 00:12:33.123 }, 00:12:33.123 { 00:12:33.123 "name": null, 00:12:33.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.123 "is_configured": false, 00:12:33.123 "data_offset": 2048, 00:12:33.123 "data_size": 63488 00:12:33.123 }, 00:12:33.123 { 00:12:33.123 "name": "BaseBdev3", 00:12:33.123 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:33.123 "is_configured": true, 00:12:33.123 "data_offset": 2048, 00:12:33.123 "data_size": 63488 00:12:33.123 }, 00:12:33.123 { 00:12:33.123 "name": "BaseBdev4", 00:12:33.123 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:33.123 "is_configured": true, 00:12:33.123 "data_offset": 2048, 00:12:33.123 "data_size": 63488 00:12:33.123 } 00:12:33.123 ] 00:12:33.123 }' 00:12:33.123 15:28:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:33.123 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.383 [2024-11-26 15:28:31.672369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.383 15:28:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.383 "name": "raid_bdev1", 00:12:33.383 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:33.383 "strip_size_kb": 0, 00:12:33.383 "state": "online", 00:12:33.383 "raid_level": "raid1", 00:12:33.383 "superblock": true, 00:12:33.383 "num_base_bdevs": 4, 00:12:33.383 "num_base_bdevs_discovered": 2, 00:12:33.383 "num_base_bdevs_operational": 2, 00:12:33.383 "base_bdevs_list": [ 00:12:33.383 { 00:12:33.383 "name": null, 00:12:33.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.383 "is_configured": false, 00:12:33.383 "data_offset": 0, 00:12:33.383 "data_size": 63488 00:12:33.383 }, 00:12:33.383 { 
00:12:33.383 "name": null, 00:12:33.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.383 "is_configured": false, 00:12:33.383 "data_offset": 2048, 00:12:33.383 "data_size": 63488 00:12:33.383 }, 00:12:33.383 { 00:12:33.383 "name": "BaseBdev3", 00:12:33.383 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:33.383 "is_configured": true, 00:12:33.383 "data_offset": 2048, 00:12:33.383 "data_size": 63488 00:12:33.383 }, 00:12:33.383 { 00:12:33.383 "name": "BaseBdev4", 00:12:33.383 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:33.383 "is_configured": true, 00:12:33.383 "data_offset": 2048, 00:12:33.383 "data_size": 63488 00:12:33.383 } 00:12:33.383 ] 00:12:33.383 }' 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.383 15:28:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.643 15:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.643 15:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.643 15:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.643 [2024-11-26 15:28:32.076550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.643 [2024-11-26 15:28:32.076774] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:33.643 [2024-11-26 15:28:32.076793] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:33.643 [2024-11-26 15:28:32.076848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.643 [2024-11-26 15:28:32.080892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2160 00:12:33.643 15:28:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.643 15:28:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:33.643 [2024-11-26 15:28:32.082892] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.025 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.025 "name": "raid_bdev1", 00:12:35.025 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:35.025 "strip_size_kb": 0, 00:12:35.025 "state": "online", 00:12:35.025 "raid_level": "raid1", 
00:12:35.025 "superblock": true, 00:12:35.025 "num_base_bdevs": 4, 00:12:35.025 "num_base_bdevs_discovered": 3, 00:12:35.025 "num_base_bdevs_operational": 3, 00:12:35.025 "process": { 00:12:35.025 "type": "rebuild", 00:12:35.025 "target": "spare", 00:12:35.025 "progress": { 00:12:35.025 "blocks": 20480, 00:12:35.025 "percent": 32 00:12:35.025 } 00:12:35.025 }, 00:12:35.025 "base_bdevs_list": [ 00:12:35.025 { 00:12:35.025 "name": "spare", 00:12:35.025 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:35.025 "is_configured": true, 00:12:35.025 "data_offset": 2048, 00:12:35.025 "data_size": 63488 00:12:35.025 }, 00:12:35.025 { 00:12:35.025 "name": null, 00:12:35.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.025 "is_configured": false, 00:12:35.026 "data_offset": 2048, 00:12:35.026 "data_size": 63488 00:12:35.026 }, 00:12:35.026 { 00:12:35.026 "name": "BaseBdev3", 00:12:35.026 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:35.026 "is_configured": true, 00:12:35.026 "data_offset": 2048, 00:12:35.026 "data_size": 63488 00:12:35.026 }, 00:12:35.026 { 00:12:35.026 "name": "BaseBdev4", 00:12:35.026 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:35.026 "is_configured": true, 00:12:35.026 "data_offset": 2048, 00:12:35.026 "data_size": 63488 00:12:35.026 } 00:12:35.026 ] 00:12:35.026 }' 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.026 [2024-11-26 15:28:33.221747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.026 [2024-11-26 15:28:33.289409] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:35.026 [2024-11-26 15:28:33.289471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.026 [2024-11-26 15:28:33.289490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.026 [2024-11-26 15:28:33.289497] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.026 "name": "raid_bdev1", 00:12:35.026 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:35.026 "strip_size_kb": 0, 00:12:35.026 "state": "online", 00:12:35.026 "raid_level": "raid1", 00:12:35.026 "superblock": true, 00:12:35.026 "num_base_bdevs": 4, 00:12:35.026 "num_base_bdevs_discovered": 2, 00:12:35.026 "num_base_bdevs_operational": 2, 00:12:35.026 "base_bdevs_list": [ 00:12:35.026 { 00:12:35.026 "name": null, 00:12:35.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.026 "is_configured": false, 00:12:35.026 "data_offset": 0, 00:12:35.026 "data_size": 63488 00:12:35.026 }, 00:12:35.026 { 00:12:35.026 "name": null, 00:12:35.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.026 "is_configured": false, 00:12:35.026 "data_offset": 2048, 00:12:35.026 "data_size": 63488 00:12:35.026 }, 00:12:35.026 { 00:12:35.026 "name": "BaseBdev3", 00:12:35.026 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:35.026 "is_configured": true, 00:12:35.026 "data_offset": 2048, 00:12:35.026 "data_size": 63488 00:12:35.026 }, 00:12:35.026 { 00:12:35.026 "name": "BaseBdev4", 00:12:35.026 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:35.026 "is_configured": true, 00:12:35.026 "data_offset": 2048, 00:12:35.026 "data_size": 63488 00:12:35.026 } 00:12:35.026 ] 00:12:35.026 }' 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:35.026 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.286 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:35.286 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.286 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.287 [2024-11-26 15:28:33.681965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:35.287 [2024-11-26 15:28:33.682030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.287 [2024-11-26 15:28:33.682079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:35.287 [2024-11-26 15:28:33.682093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.287 [2024-11-26 15:28:33.682540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.287 [2024-11-26 15:28:33.682566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:35.287 [2024-11-26 15:28:33.682664] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:35.287 [2024-11-26 15:28:33.682682] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:35.287 [2024-11-26 15:28:33.682693] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:35.287 [2024-11-26 15:28:33.682714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.287 [2024-11-26 15:28:33.686686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2230 00:12:35.287 spare 00:12:35.287 15:28:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.287 15:28:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:35.287 [2024-11-26 15:28:33.688553] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.235 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.235 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.235 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.235 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.235 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.235 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.235 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.235 15:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.235 15:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.506 "name": "raid_bdev1", 00:12:36.506 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:36.506 "strip_size_kb": 0, 00:12:36.506 "state": "online", 00:12:36.506 
"raid_level": "raid1", 00:12:36.506 "superblock": true, 00:12:36.506 "num_base_bdevs": 4, 00:12:36.506 "num_base_bdevs_discovered": 3, 00:12:36.506 "num_base_bdevs_operational": 3, 00:12:36.506 "process": { 00:12:36.506 "type": "rebuild", 00:12:36.506 "target": "spare", 00:12:36.506 "progress": { 00:12:36.506 "blocks": 20480, 00:12:36.506 "percent": 32 00:12:36.506 } 00:12:36.506 }, 00:12:36.506 "base_bdevs_list": [ 00:12:36.506 { 00:12:36.506 "name": "spare", 00:12:36.506 "uuid": "d97dd138-e704-5129-bcc2-ce09bc557912", 00:12:36.506 "is_configured": true, 00:12:36.506 "data_offset": 2048, 00:12:36.506 "data_size": 63488 00:12:36.506 }, 00:12:36.506 { 00:12:36.506 "name": null, 00:12:36.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.506 "is_configured": false, 00:12:36.506 "data_offset": 2048, 00:12:36.506 "data_size": 63488 00:12:36.506 }, 00:12:36.506 { 00:12:36.506 "name": "BaseBdev3", 00:12:36.506 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:36.506 "is_configured": true, 00:12:36.506 "data_offset": 2048, 00:12:36.506 "data_size": 63488 00:12:36.506 }, 00:12:36.506 { 00:12:36.506 "name": "BaseBdev4", 00:12:36.506 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:36.506 "is_configured": true, 00:12:36.506 "data_offset": 2048, 00:12:36.506 "data_size": 63488 00:12:36.506 } 00:12:36.506 ] 00:12:36.506 }' 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.506 [2024-11-26 15:28:34.811147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.506 [2024-11-26 15:28:34.894617] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:36.506 [2024-11-26 15:28:34.894673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.506 [2024-11-26 15:28:34.894702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.506 [2024-11-26 15:28:34.894711] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.506 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.506 
15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.507 15:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.507 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.507 15:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.507 15:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.507 15:28:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.507 "name": "raid_bdev1", 00:12:36.507 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:36.507 "strip_size_kb": 0, 00:12:36.507 "state": "online", 00:12:36.507 "raid_level": "raid1", 00:12:36.507 "superblock": true, 00:12:36.507 "num_base_bdevs": 4, 00:12:36.507 "num_base_bdevs_discovered": 2, 00:12:36.507 "num_base_bdevs_operational": 2, 00:12:36.507 "base_bdevs_list": [ 00:12:36.507 { 00:12:36.507 "name": null, 00:12:36.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.507 "is_configured": false, 00:12:36.507 "data_offset": 0, 00:12:36.507 "data_size": 63488 00:12:36.507 }, 00:12:36.507 { 00:12:36.507 "name": null, 00:12:36.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.507 "is_configured": false, 00:12:36.507 "data_offset": 2048, 00:12:36.507 "data_size": 63488 00:12:36.507 }, 00:12:36.507 { 00:12:36.507 "name": "BaseBdev3", 00:12:36.507 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:36.507 "is_configured": true, 00:12:36.507 "data_offset": 2048, 00:12:36.507 "data_size": 63488 00:12:36.507 }, 00:12:36.507 { 00:12:36.507 "name": "BaseBdev4", 00:12:36.507 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:36.507 "is_configured": true, 00:12:36.507 "data_offset": 2048, 00:12:36.507 "data_size": 63488 00:12:36.507 } 00:12:36.507 ] 00:12:36.507 }' 00:12:36.507 15:28:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.507 15:28:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.078 "name": "raid_bdev1", 00:12:37.078 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:37.078 "strip_size_kb": 0, 00:12:37.078 "state": "online", 00:12:37.078 "raid_level": "raid1", 00:12:37.078 "superblock": true, 00:12:37.078 "num_base_bdevs": 4, 00:12:37.078 "num_base_bdevs_discovered": 2, 00:12:37.078 "num_base_bdevs_operational": 2, 00:12:37.078 "base_bdevs_list": [ 00:12:37.078 { 00:12:37.078 "name": null, 00:12:37.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.078 "is_configured": false, 00:12:37.078 "data_offset": 0, 00:12:37.078 "data_size": 63488 00:12:37.078 }, 00:12:37.078 
{ 00:12:37.078 "name": null, 00:12:37.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.078 "is_configured": false, 00:12:37.078 "data_offset": 2048, 00:12:37.078 "data_size": 63488 00:12:37.078 }, 00:12:37.078 { 00:12:37.078 "name": "BaseBdev3", 00:12:37.078 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:37.078 "is_configured": true, 00:12:37.078 "data_offset": 2048, 00:12:37.078 "data_size": 63488 00:12:37.078 }, 00:12:37.078 { 00:12:37.078 "name": "BaseBdev4", 00:12:37.078 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:37.078 "is_configured": true, 00:12:37.078 "data_offset": 2048, 00:12:37.078 "data_size": 63488 00:12:37.078 } 00:12:37.078 ] 00:12:37.078 }' 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.078 [2024-11-26 15:28:35.466987] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:37.078 [2024-11-26 15:28:35.467047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.078 [2024-11-26 15:28:35.467087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:37.078 [2024-11-26 15:28:35.467098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.078 [2024-11-26 15:28:35.467496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.078 [2024-11-26 15:28:35.467526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:37.078 [2024-11-26 15:28:35.467592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:37.078 [2024-11-26 15:28:35.467613] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:37.078 [2024-11-26 15:28:35.467621] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:37.078 [2024-11-26 15:28:35.467635] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:37.078 BaseBdev1 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.078 15:28:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.020 15:28:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.020 15:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.280 15:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.280 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.280 "name": "raid_bdev1", 00:12:38.280 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:38.280 "strip_size_kb": 0, 00:12:38.280 "state": "online", 00:12:38.280 "raid_level": "raid1", 00:12:38.280 "superblock": true, 00:12:38.281 "num_base_bdevs": 4, 00:12:38.281 "num_base_bdevs_discovered": 2, 00:12:38.281 "num_base_bdevs_operational": 2, 00:12:38.281 "base_bdevs_list": [ 00:12:38.281 { 00:12:38.281 "name": null, 00:12:38.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.281 "is_configured": false, 00:12:38.281 "data_offset": 0, 00:12:38.281 "data_size": 63488 00:12:38.281 }, 00:12:38.281 { 00:12:38.281 "name": null, 00:12:38.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.281 
"is_configured": false, 00:12:38.281 "data_offset": 2048, 00:12:38.281 "data_size": 63488 00:12:38.281 }, 00:12:38.281 { 00:12:38.281 "name": "BaseBdev3", 00:12:38.281 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:38.281 "is_configured": true, 00:12:38.281 "data_offset": 2048, 00:12:38.281 "data_size": 63488 00:12:38.281 }, 00:12:38.281 { 00:12:38.281 "name": "BaseBdev4", 00:12:38.281 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:38.281 "is_configured": true, 00:12:38.281 "data_offset": 2048, 00:12:38.281 "data_size": 63488 00:12:38.281 } 00:12:38.281 ] 00:12:38.281 }' 00:12:38.281 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.281 15:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:38.541 "name": "raid_bdev1", 00:12:38.541 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:38.541 "strip_size_kb": 0, 00:12:38.541 "state": "online", 00:12:38.541 "raid_level": "raid1", 00:12:38.541 "superblock": true, 00:12:38.541 "num_base_bdevs": 4, 00:12:38.541 "num_base_bdevs_discovered": 2, 00:12:38.541 "num_base_bdevs_operational": 2, 00:12:38.541 "base_bdevs_list": [ 00:12:38.541 { 00:12:38.541 "name": null, 00:12:38.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.541 "is_configured": false, 00:12:38.541 "data_offset": 0, 00:12:38.541 "data_size": 63488 00:12:38.541 }, 00:12:38.541 { 00:12:38.541 "name": null, 00:12:38.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.541 "is_configured": false, 00:12:38.541 "data_offset": 2048, 00:12:38.541 "data_size": 63488 00:12:38.541 }, 00:12:38.541 { 00:12:38.541 "name": "BaseBdev3", 00:12:38.541 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:38.541 "is_configured": true, 00:12:38.541 "data_offset": 2048, 00:12:38.541 "data_size": 63488 00:12:38.541 }, 00:12:38.541 { 00:12:38.541 "name": "BaseBdev4", 00:12:38.541 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:38.541 "is_configured": true, 00:12:38.541 "data_offset": 2048, 00:12:38.541 "data_size": 63488 00:12:38.541 } 00:12:38.541 ] 00:12:38.541 }' 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.541 15:28:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.801 [2024-11-26 15:28:37.051494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.801 [2024-11-26 15:28:37.051658] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:38.801 [2024-11-26 15:28:37.051674] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:38.801 request: 00:12:38.801 { 00:12:38.801 "base_bdev": "BaseBdev1", 00:12:38.801 "raid_bdev": "raid_bdev1", 00:12:38.801 "method": "bdev_raid_add_base_bdev", 00:12:38.801 "req_id": 1 00:12:38.801 } 00:12:38.801 Got JSON-RPC error response 00:12:38.801 response: 00:12:38.801 { 00:12:38.801 "code": -22, 00:12:38.801 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:38.801 } 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:38.801 15:28:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.740 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.740 "name": "raid_bdev1", 00:12:39.740 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:39.740 "strip_size_kb": 0, 00:12:39.740 "state": "online", 00:12:39.740 "raid_level": "raid1", 00:12:39.740 "superblock": true, 00:12:39.740 "num_base_bdevs": 4, 00:12:39.740 "num_base_bdevs_discovered": 2, 00:12:39.740 "num_base_bdevs_operational": 2, 00:12:39.740 "base_bdevs_list": [ 00:12:39.740 { 00:12:39.740 "name": null, 00:12:39.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.740 "is_configured": false, 00:12:39.740 "data_offset": 0, 00:12:39.740 "data_size": 63488 00:12:39.740 }, 00:12:39.740 { 00:12:39.740 "name": null, 00:12:39.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.740 "is_configured": false, 00:12:39.740 "data_offset": 2048, 00:12:39.740 "data_size": 63488 00:12:39.740 }, 00:12:39.740 { 00:12:39.740 "name": "BaseBdev3", 00:12:39.740 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:39.740 "is_configured": true, 00:12:39.740 "data_offset": 2048, 00:12:39.740 "data_size": 63488 00:12:39.740 }, 00:12:39.740 { 00:12:39.740 "name": "BaseBdev4", 00:12:39.740 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:39.740 "is_configured": true, 00:12:39.740 "data_offset": 2048, 00:12:39.740 "data_size": 63488 00:12:39.740 } 00:12:39.741 ] 00:12:39.741 }' 00:12:39.741 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.741 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.311 15:28:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.311 "name": "raid_bdev1", 00:12:40.311 "uuid": "4a7c11e0-a07f-4a4f-8d6c-12ca68585f5c", 00:12:40.311 "strip_size_kb": 0, 00:12:40.311 "state": "online", 00:12:40.311 "raid_level": "raid1", 00:12:40.311 "superblock": true, 00:12:40.311 "num_base_bdevs": 4, 00:12:40.311 "num_base_bdevs_discovered": 2, 00:12:40.311 "num_base_bdevs_operational": 2, 00:12:40.311 "base_bdevs_list": [ 00:12:40.311 { 00:12:40.311 "name": null, 00:12:40.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.311 "is_configured": false, 00:12:40.311 "data_offset": 0, 00:12:40.311 "data_size": 63488 00:12:40.311 }, 00:12:40.311 { 00:12:40.311 "name": null, 00:12:40.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.311 "is_configured": false, 00:12:40.311 "data_offset": 2048, 00:12:40.311 "data_size": 63488 00:12:40.311 }, 00:12:40.311 { 00:12:40.311 "name": "BaseBdev3", 00:12:40.311 "uuid": "5074e769-1c65-543e-9515-81de689b410b", 00:12:40.311 "is_configured": true, 00:12:40.311 "data_offset": 2048, 00:12:40.311 "data_size": 63488 00:12:40.311 }, 
00:12:40.311 { 00:12:40.311 "name": "BaseBdev4", 00:12:40.311 "uuid": "0e33d87d-7b6d-5275-a357-ebec3d63f65c", 00:12:40.311 "is_configured": true, 00:12:40.311 "data_offset": 2048, 00:12:40.311 "data_size": 63488 00:12:40.311 } 00:12:40.311 ] 00:12:40.311 }' 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 90135 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 90135 ']' 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 90135 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90135 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.311 killing process with pid 90135 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90135' 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 90135 00:12:40.311 Received shutdown signal, test time was about 60.000000 seconds 00:12:40.311 00:12:40.311 Latency(us) 00:12:40.311 
[2024-11-26T15:28:38.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.311 [2024-11-26T15:28:38.790Z] =================================================================================================================== 00:12:40.311 [2024-11-26T15:28:38.790Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:40.311 [2024-11-26 15:28:38.694139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.311 [2024-11-26 15:28:38.694266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:40.311 [2024-11-26 15:28:38.694339] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:40.311 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 90135 00:12:40.312 [2024-11-26 15:28:38.694352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:40.312 [2024-11-26 15:28:38.744089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.572 15:28:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:40.572 00:12:40.572 real 0m23.118s 00:12:40.572 user 0m28.279s 00:12:40.572 sys 0m3.658s 00:12:40.572 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.572 15:28:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.572 ************************************ 00:12:40.572 END TEST raid_rebuild_test_sb 00:12:40.572 ************************************ 00:12:40.572 15:28:39 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:40.572 15:28:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:40.572 15:28:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.572 15:28:39 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:12:40.572 ************************************ 00:12:40.572 START TEST raid_rebuild_test_io 00:12:40.572 ************************************ 00:12:40.572 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:12:40.572 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:40.572 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:40.572 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:40.572 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90872 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90872 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 90872 ']' 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.573 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.833 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:40.833 Zero copy mechanism will not be used. 00:12:40.833 [2024-11-26 15:28:39.124773] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:12:40.833 [2024-11-26 15:28:39.124880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90872 ] 00:12:40.833 [2024-11-26 15:28:39.259371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:40.833 [2024-11-26 15:28:39.299831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.093 [2024-11-26 15:28:39.324521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.093 [2024-11-26 15:28:39.367416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.093 [2024-11-26 15:28:39.367457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.663 BaseBdev1_malloc 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.663 [2024-11-26 15:28:39.959422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:41.663 [2024-11-26 15:28:39.959499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.663 [2024-11-26 15:28:39.959527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:41.663 [2024-11-26 
15:28:39.959548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.663 [2024-11-26 15:28:39.961622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.663 [2024-11-26 15:28:39.961661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.663 BaseBdev1 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.663 BaseBdev2_malloc 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.663 [2024-11-26 15:28:39.979912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:41.663 [2024-11-26 15:28:39.979969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.663 [2024-11-26 15:28:39.980004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:41.663 [2024-11-26 15:28:39.980014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.663 [2024-11-26 15:28:39.982095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:41.663 [2024-11-26 15:28:39.982135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:41.663 BaseBdev2 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.663 BaseBdev3_malloc 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.663 15:28:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.663 [2024-11-26 15:28:40.004547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:41.663 [2024-11-26 15:28:40.004604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.663 [2024-11-26 15:28:40.004623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:41.664 [2024-11-26 15:28:40.004640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.664 [2024-11-26 15:28:40.006690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.664 [2024-11-26 15:28:40.006726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:41.664 BaseBdev3 00:12:41.664 15:28:40 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.664 BaseBdev4_malloc 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.664 [2024-11-26 15:28:40.050970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:41.664 [2024-11-26 15:28:40.051082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.664 [2024-11-26 15:28:40.051125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:41.664 [2024-11-26 15:28:40.051150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.664 [2024-11-26 15:28:40.054784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.664 [2024-11-26 15:28:40.054841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:41.664 BaseBdev4 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.664 spare_malloc 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.664 spare_delay 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.664 [2024-11-26 15:28:40.092108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:41.664 [2024-11-26 15:28:40.092182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.664 [2024-11-26 15:28:40.092214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:41.664 [2024-11-26 15:28:40.092227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.664 [2024-11-26 15:28:40.094246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.664 [2024-11-26 15:28:40.094280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:41.664 spare 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.664 [2024-11-26 15:28:40.104198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.664 [2024-11-26 15:28:40.106004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.664 [2024-11-26 15:28:40.106069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.664 [2024-11-26 15:28:40.106113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:41.664 [2024-11-26 15:28:40.106194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:41.664 [2024-11-26 15:28:40.106218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:41.664 [2024-11-26 15:28:40.106474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:41.664 [2024-11-26 15:28:40.106624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:41.664 [2024-11-26 15:28:40.106641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:41.664 [2024-11-26 15:28:40.106747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:41.664 15:28:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.664 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.924 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.924 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.924 "name": "raid_bdev1", 00:12:41.924 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:41.924 "strip_size_kb": 0, 00:12:41.924 "state": "online", 00:12:41.924 "raid_level": "raid1", 00:12:41.924 "superblock": false, 00:12:41.924 "num_base_bdevs": 4, 00:12:41.924 "num_base_bdevs_discovered": 4, 00:12:41.924 "num_base_bdevs_operational": 4, 00:12:41.924 "base_bdevs_list": [ 00:12:41.924 
{ 00:12:41.924 "name": "BaseBdev1", 00:12:41.924 "uuid": "ee71ef9b-3311-543b-89a6-b04b055d040b", 00:12:41.924 "is_configured": true, 00:12:41.924 "data_offset": 0, 00:12:41.924 "data_size": 65536 00:12:41.924 }, 00:12:41.924 { 00:12:41.924 "name": "BaseBdev2", 00:12:41.924 "uuid": "4f1bfbe4-53ab-5551-8bfd-0cb07db737d0", 00:12:41.924 "is_configured": true, 00:12:41.924 "data_offset": 0, 00:12:41.924 "data_size": 65536 00:12:41.924 }, 00:12:41.924 { 00:12:41.924 "name": "BaseBdev3", 00:12:41.924 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:41.924 "is_configured": true, 00:12:41.924 "data_offset": 0, 00:12:41.924 "data_size": 65536 00:12:41.924 }, 00:12:41.924 { 00:12:41.924 "name": "BaseBdev4", 00:12:41.924 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:41.924 "is_configured": true, 00:12:41.924 "data_offset": 0, 00:12:41.924 "data_size": 65536 00:12:41.924 } 00:12:41.924 ] 00:12:41.924 }' 00:12:41.924 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.924 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.215 [2024-11-26 15:28:40.524561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.215 
15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.215 [2024-11-26 15:28:40.616281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.215 "name": "raid_bdev1", 00:12:42.215 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:42.215 "strip_size_kb": 0, 00:12:42.215 "state": "online", 00:12:42.215 "raid_level": "raid1", 00:12:42.215 "superblock": false, 00:12:42.215 "num_base_bdevs": 4, 00:12:42.215 "num_base_bdevs_discovered": 3, 00:12:42.215 "num_base_bdevs_operational": 3, 00:12:42.215 "base_bdevs_list": [ 00:12:42.215 { 00:12:42.215 "name": null, 00:12:42.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.215 "is_configured": false, 00:12:42.215 "data_offset": 0, 00:12:42.215 "data_size": 65536 00:12:42.215 }, 00:12:42.215 { 00:12:42.215 "name": "BaseBdev2", 00:12:42.215 "uuid": "4f1bfbe4-53ab-5551-8bfd-0cb07db737d0", 00:12:42.215 "is_configured": true, 00:12:42.215 "data_offset": 0, 00:12:42.215 "data_size": 65536 00:12:42.215 }, 00:12:42.215 { 00:12:42.215 "name": "BaseBdev3", 00:12:42.215 "uuid": 
"84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:42.215 "is_configured": true, 00:12:42.215 "data_offset": 0, 00:12:42.215 "data_size": 65536 00:12:42.215 }, 00:12:42.215 { 00:12:42.215 "name": "BaseBdev4", 00:12:42.215 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:42.215 "is_configured": true, 00:12:42.215 "data_offset": 0, 00:12:42.215 "data_size": 65536 00:12:42.215 } 00:12:42.215 ] 00:12:42.215 }' 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.215 15:28:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.487 [2024-11-26 15:28:40.686279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:12:42.487 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:42.487 Zero copy mechanism will not be used. 00:12:42.487 Running I/O for 60 seconds... 00:12:42.747 15:28:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:42.747 15:28:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.747 15:28:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.747 [2024-11-26 15:28:41.049974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.747 15:28:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.747 15:28:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:42.747 [2024-11-26 15:28:41.091179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:42.747 [2024-11-26 15:28:41.093166] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.747 [2024-11-26 15:28:41.208271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:42.747 
[2024-11-26 15:28:41.208603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:43.007 [2024-11-26 15:28:41.334679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:43.007 [2024-11-26 15:28:41.335382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:43.267 [2024-11-26 15:28:41.674991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:43.527 156.00 IOPS, 468.00 MiB/s [2024-11-26T15:28:42.006Z] [2024-11-26 15:28:41.890092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:43.527 [2024-11-26 15:28:41.890706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.787 "name": "raid_bdev1", 00:12:43.787 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:43.787 "strip_size_kb": 0, 00:12:43.787 "state": "online", 00:12:43.787 "raid_level": "raid1", 00:12:43.787 "superblock": false, 00:12:43.787 "num_base_bdevs": 4, 00:12:43.787 "num_base_bdevs_discovered": 4, 00:12:43.787 "num_base_bdevs_operational": 4, 00:12:43.787 "process": { 00:12:43.787 "type": "rebuild", 00:12:43.787 "target": "spare", 00:12:43.787 "progress": { 00:12:43.787 "blocks": 10240, 00:12:43.787 "percent": 15 00:12:43.787 } 00:12:43.787 }, 00:12:43.787 "base_bdevs_list": [ 00:12:43.787 { 00:12:43.787 "name": "spare", 00:12:43.787 "uuid": "d84c4d2c-38f4-5a42-872c-ef7cb5e6b752", 00:12:43.787 "is_configured": true, 00:12:43.787 "data_offset": 0, 00:12:43.787 "data_size": 65536 00:12:43.787 }, 00:12:43.787 { 00:12:43.787 "name": "BaseBdev2", 00:12:43.787 "uuid": "4f1bfbe4-53ab-5551-8bfd-0cb07db737d0", 00:12:43.787 "is_configured": true, 00:12:43.787 "data_offset": 0, 00:12:43.787 "data_size": 65536 00:12:43.787 }, 00:12:43.787 { 00:12:43.787 "name": "BaseBdev3", 00:12:43.787 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:43.787 "is_configured": true, 00:12:43.787 "data_offset": 0, 00:12:43.787 "data_size": 65536 00:12:43.787 }, 00:12:43.787 { 00:12:43.787 "name": "BaseBdev4", 00:12:43.787 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:43.787 "is_configured": true, 00:12:43.787 "data_offset": 0, 00:12:43.787 "data_size": 65536 00:12:43.787 } 00:12:43.787 ] 00:12:43.787 }' 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.787 15:28:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.787 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.787 [2024-11-26 15:28:42.213943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.787 [2024-11-26 15:28:42.246420] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:44.047 [2024-11-26 15:28:42.353968] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:44.047 [2024-11-26 15:28:42.363250] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.047 [2024-11-26 15:28:42.363303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.047 [2024-11-26 15:28:42.363319] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:44.047 [2024-11-26 15:28:42.380753] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.047 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.048 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.048 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.048 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.048 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.048 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.048 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.048 "name": "raid_bdev1", 00:12:44.048 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:44.048 "strip_size_kb": 0, 00:12:44.048 "state": "online", 00:12:44.048 "raid_level": "raid1", 00:12:44.048 "superblock": false, 00:12:44.048 "num_base_bdevs": 4, 00:12:44.048 "num_base_bdevs_discovered": 3, 00:12:44.048 "num_base_bdevs_operational": 3, 00:12:44.048 "base_bdevs_list": [ 00:12:44.048 { 00:12:44.048 "name": null, 00:12:44.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.048 "is_configured": false, 00:12:44.048 "data_offset": 0, 00:12:44.048 "data_size": 65536 00:12:44.048 }, 00:12:44.048 { 00:12:44.048 "name": "BaseBdev2", 
00:12:44.048 "uuid": "4f1bfbe4-53ab-5551-8bfd-0cb07db737d0", 00:12:44.048 "is_configured": true, 00:12:44.048 "data_offset": 0, 00:12:44.048 "data_size": 65536 00:12:44.048 }, 00:12:44.048 { 00:12:44.048 "name": "BaseBdev3", 00:12:44.048 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:44.048 "is_configured": true, 00:12:44.048 "data_offset": 0, 00:12:44.048 "data_size": 65536 00:12:44.048 }, 00:12:44.048 { 00:12:44.048 "name": "BaseBdev4", 00:12:44.048 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:44.048 "is_configured": true, 00:12:44.048 "data_offset": 0, 00:12:44.048 "data_size": 65536 00:12:44.048 } 00:12:44.048 ] 00:12:44.048 }' 00:12:44.048 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.048 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.568 128.50 IOPS, 385.50 MiB/s [2024-11-26T15:28:43.047Z] 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.568 "name": "raid_bdev1", 00:12:44.568 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:44.568 "strip_size_kb": 0, 00:12:44.568 "state": "online", 00:12:44.568 "raid_level": "raid1", 00:12:44.568 "superblock": false, 00:12:44.568 "num_base_bdevs": 4, 00:12:44.568 "num_base_bdevs_discovered": 3, 00:12:44.568 "num_base_bdevs_operational": 3, 00:12:44.568 "base_bdevs_list": [ 00:12:44.568 { 00:12:44.568 "name": null, 00:12:44.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.568 "is_configured": false, 00:12:44.568 "data_offset": 0, 00:12:44.568 "data_size": 65536 00:12:44.568 }, 00:12:44.568 { 00:12:44.568 "name": "BaseBdev2", 00:12:44.568 "uuid": "4f1bfbe4-53ab-5551-8bfd-0cb07db737d0", 00:12:44.568 "is_configured": true, 00:12:44.568 "data_offset": 0, 00:12:44.568 "data_size": 65536 00:12:44.568 }, 00:12:44.568 { 00:12:44.568 "name": "BaseBdev3", 00:12:44.568 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:44.568 "is_configured": true, 00:12:44.568 "data_offset": 0, 00:12:44.568 "data_size": 65536 00:12:44.568 }, 00:12:44.568 { 00:12:44.568 "name": "BaseBdev4", 00:12:44.568 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:44.568 "is_configured": true, 00:12:44.568 "data_offset": 0, 00:12:44.568 "data_size": 65536 00:12:44.568 } 00:12:44.568 ] 00:12:44.568 }' 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.568 [2024-11-26 15:28:42.930798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.568 15:28:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:44.568 [2024-11-26 15:28:43.004036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:12:44.568 [2024-11-26 15:28:43.005992] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.828 [2024-11-26 15:28:43.142329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.828 [2024-11-26 15:28:43.264108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.828 [2024-11-26 15:28:43.264353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:45.398 [2024-11-26 15:28:43.592168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:45.398 [2024-11-26 15:28:43.593254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:45.398 134.00 IOPS, 402.00 MiB/s [2024-11-26T15:28:43.877Z] [2024-11-26 15:28:43.825117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:45.659 15:28:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.659 15:28:43 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.659 15:28:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.659 15:28:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.659 15:28:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.659 15:28:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.659 15:28:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.659 15:28:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.659 15:28:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.659 "name": "raid_bdev1", 00:12:45.659 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:45.659 "strip_size_kb": 0, 00:12:45.659 "state": "online", 00:12:45.659 "raid_level": "raid1", 00:12:45.659 "superblock": false, 00:12:45.659 "num_base_bdevs": 4, 00:12:45.659 "num_base_bdevs_discovered": 4, 00:12:45.659 "num_base_bdevs_operational": 4, 00:12:45.659 "process": { 00:12:45.659 "type": "rebuild", 00:12:45.659 "target": "spare", 00:12:45.659 "progress": { 00:12:45.659 "blocks": 12288, 00:12:45.659 "percent": 18 00:12:45.659 } 00:12:45.659 }, 00:12:45.659 "base_bdevs_list": [ 00:12:45.659 { 00:12:45.659 "name": "spare", 00:12:45.659 "uuid": "d84c4d2c-38f4-5a42-872c-ef7cb5e6b752", 00:12:45.659 "is_configured": true, 00:12:45.659 "data_offset": 0, 00:12:45.659 "data_size": 65536 00:12:45.659 }, 00:12:45.659 { 00:12:45.659 "name": "BaseBdev2", 00:12:45.659 "uuid": "4f1bfbe4-53ab-5551-8bfd-0cb07db737d0", 00:12:45.659 "is_configured": true, 00:12:45.659 
"data_offset": 0, 00:12:45.659 "data_size": 65536 00:12:45.659 }, 00:12:45.659 { 00:12:45.659 "name": "BaseBdev3", 00:12:45.659 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:45.659 "is_configured": true, 00:12:45.659 "data_offset": 0, 00:12:45.659 "data_size": 65536 00:12:45.659 }, 00:12:45.659 { 00:12:45.659 "name": "BaseBdev4", 00:12:45.659 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:45.659 "is_configured": true, 00:12:45.659 "data_offset": 0, 00:12:45.659 "data_size": 65536 00:12:45.659 } 00:12:45.659 ] 00:12:45.659 }' 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.659 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.659 [2024-11-26 15:28:44.127482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:45.919 [2024-11-26 15:28:44.203445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:12:45.919 [2024-11-26 15:28:44.301523] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:12:45.919 [2024-11-26 15:28:44.301560] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:12:45.919 [2024-11-26 15:28:44.303654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:45.919 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:45.920 "name": "raid_bdev1", 00:12:45.920 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:45.920 "strip_size_kb": 0, 00:12:45.920 "state": "online", 00:12:45.920 "raid_level": "raid1", 00:12:45.920 "superblock": false, 00:12:45.920 "num_base_bdevs": 4, 00:12:45.920 "num_base_bdevs_discovered": 3, 00:12:45.920 "num_base_bdevs_operational": 3, 00:12:45.920 "process": { 00:12:45.920 "type": "rebuild", 00:12:45.920 "target": "spare", 00:12:45.920 "progress": { 00:12:45.920 "blocks": 16384, 00:12:45.920 "percent": 25 00:12:45.920 } 00:12:45.920 }, 00:12:45.920 "base_bdevs_list": [ 00:12:45.920 { 00:12:45.920 "name": "spare", 00:12:45.920 "uuid": "d84c4d2c-38f4-5a42-872c-ef7cb5e6b752", 00:12:45.920 "is_configured": true, 00:12:45.920 "data_offset": 0, 00:12:45.920 "data_size": 65536 00:12:45.920 }, 00:12:45.920 { 00:12:45.920 "name": null, 00:12:45.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.920 "is_configured": false, 00:12:45.920 "data_offset": 0, 00:12:45.920 "data_size": 65536 00:12:45.920 }, 00:12:45.920 { 00:12:45.920 "name": "BaseBdev3", 00:12:45.920 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:45.920 "is_configured": true, 00:12:45.920 "data_offset": 0, 00:12:45.920 "data_size": 65536 00:12:45.920 }, 00:12:45.920 { 00:12:45.920 "name": "BaseBdev4", 00:12:45.920 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:45.920 "is_configured": true, 00:12:45.920 "data_offset": 0, 00:12:45.920 "data_size": 65536 00:12:45.920 } 00:12:45.920 ] 00:12:45.920 }' 00:12:45.920 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=383 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.180 "name": "raid_bdev1", 00:12:46.180 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:46.180 "strip_size_kb": 0, 00:12:46.180 "state": "online", 00:12:46.180 "raid_level": "raid1", 00:12:46.180 "superblock": false, 00:12:46.180 "num_base_bdevs": 4, 00:12:46.180 "num_base_bdevs_discovered": 3, 00:12:46.180 "num_base_bdevs_operational": 3, 00:12:46.180 "process": { 00:12:46.180 "type": "rebuild", 00:12:46.180 "target": "spare", 00:12:46.180 "progress": { 00:12:46.180 "blocks": 18432, 00:12:46.180 "percent": 28 00:12:46.180 } 00:12:46.180 }, 00:12:46.180 
"base_bdevs_list": [ 00:12:46.180 { 00:12:46.180 "name": "spare", 00:12:46.180 "uuid": "d84c4d2c-38f4-5a42-872c-ef7cb5e6b752", 00:12:46.180 "is_configured": true, 00:12:46.180 "data_offset": 0, 00:12:46.180 "data_size": 65536 00:12:46.180 }, 00:12:46.180 { 00:12:46.180 "name": null, 00:12:46.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.180 "is_configured": false, 00:12:46.180 "data_offset": 0, 00:12:46.180 "data_size": 65536 00:12:46.180 }, 00:12:46.180 { 00:12:46.180 "name": "BaseBdev3", 00:12:46.180 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:46.180 "is_configured": true, 00:12:46.180 "data_offset": 0, 00:12:46.180 "data_size": 65536 00:12:46.180 }, 00:12:46.180 { 00:12:46.180 "name": "BaseBdev4", 00:12:46.180 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:46.180 "is_configured": true, 00:12:46.180 "data_offset": 0, 00:12:46.180 "data_size": 65536 00:12:46.180 } 00:12:46.180 ] 00:12:46.180 }' 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.180 [2024-11-26 15:28:44.526058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.180 15:28:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:46.441 120.00 IOPS, 360.00 MiB/s [2024-11-26T15:28:44.920Z] [2024-11-26 15:28:44.877615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:47.012 [2024-11-26 15:28:45.339527] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 
offset_begin: 30720 offset_end: 36864 00:12:47.302 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.303 "name": "raid_bdev1", 00:12:47.303 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:47.303 "strip_size_kb": 0, 00:12:47.303 "state": "online", 00:12:47.303 "raid_level": "raid1", 00:12:47.303 "superblock": false, 00:12:47.303 "num_base_bdevs": 4, 00:12:47.303 "num_base_bdevs_discovered": 3, 00:12:47.303 "num_base_bdevs_operational": 3, 00:12:47.303 "process": { 00:12:47.303 "type": "rebuild", 00:12:47.303 "target": "spare", 00:12:47.303 "progress": { 00:12:47.303 "blocks": 36864, 00:12:47.303 "percent": 56 00:12:47.303 } 00:12:47.303 }, 00:12:47.303 "base_bdevs_list": [ 00:12:47.303 { 00:12:47.303 "name": "spare", 
00:12:47.303 "uuid": "d84c4d2c-38f4-5a42-872c-ef7cb5e6b752", 00:12:47.303 "is_configured": true, 00:12:47.303 "data_offset": 0, 00:12:47.303 "data_size": 65536 00:12:47.303 }, 00:12:47.303 { 00:12:47.303 "name": null, 00:12:47.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.303 "is_configured": false, 00:12:47.303 "data_offset": 0, 00:12:47.303 "data_size": 65536 00:12:47.303 }, 00:12:47.303 { 00:12:47.303 "name": "BaseBdev3", 00:12:47.303 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:47.303 "is_configured": true, 00:12:47.303 "data_offset": 0, 00:12:47.303 "data_size": 65536 00:12:47.303 }, 00:12:47.303 { 00:12:47.303 "name": "BaseBdev4", 00:12:47.303 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:47.303 "is_configured": true, 00:12:47.303 "data_offset": 0, 00:12:47.303 "data_size": 65536 00:12:47.303 } 00:12:47.303 ] 00:12:47.303 }' 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.303 108.00 IOPS, 324.00 MiB/s [2024-11-26T15:28:45.782Z] 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.303 15:28:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:47.869 [2024-11-26 15:28:46.105691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:48.127 [2024-11-26 15:28:46.441730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:48.127 [2024-11-26 15:28:46.442144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:48.387 [2024-11-26 
15:28:46.658795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:48.387 96.83 IOPS, 290.50 MiB/s [2024-11-26T15:28:46.866Z] 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.387 "name": "raid_bdev1", 00:12:48.387 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:48.387 "strip_size_kb": 0, 00:12:48.387 "state": "online", 00:12:48.387 "raid_level": "raid1", 00:12:48.387 "superblock": false, 00:12:48.387 "num_base_bdevs": 4, 00:12:48.387 "num_base_bdevs_discovered": 3, 00:12:48.387 "num_base_bdevs_operational": 3, 00:12:48.387 "process": { 00:12:48.387 "type": "rebuild", 00:12:48.387 "target": "spare", 00:12:48.387 "progress": { 00:12:48.387 
"blocks": 53248, 00:12:48.387 "percent": 81 00:12:48.387 } 00:12:48.387 }, 00:12:48.387 "base_bdevs_list": [ 00:12:48.387 { 00:12:48.387 "name": "spare", 00:12:48.387 "uuid": "d84c4d2c-38f4-5a42-872c-ef7cb5e6b752", 00:12:48.387 "is_configured": true, 00:12:48.387 "data_offset": 0, 00:12:48.387 "data_size": 65536 00:12:48.387 }, 00:12:48.387 { 00:12:48.387 "name": null, 00:12:48.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.387 "is_configured": false, 00:12:48.387 "data_offset": 0, 00:12:48.387 "data_size": 65536 00:12:48.387 }, 00:12:48.387 { 00:12:48.387 "name": "BaseBdev3", 00:12:48.387 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:48.387 "is_configured": true, 00:12:48.387 "data_offset": 0, 00:12:48.387 "data_size": 65536 00:12:48.387 }, 00:12:48.387 { 00:12:48.387 "name": "BaseBdev4", 00:12:48.387 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:48.387 "is_configured": true, 00:12:48.387 "data_offset": 0, 00:12:48.387 "data_size": 65536 00:12:48.387 } 00:12:48.387 ] 00:12:48.387 }' 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.387 15:28:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.325 [2024-11-26 15:28:47.433095] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:49.325 [2024-11-26 15:28:47.538170] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:49.325 [2024-11-26 15:28:47.540431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.584 88.29 IOPS, 264.86 MiB/s 
[2024-11-26T15:28:48.063Z] 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.584 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.584 "name": "raid_bdev1", 00:12:49.584 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:49.584 "strip_size_kb": 0, 00:12:49.584 "state": "online", 00:12:49.584 "raid_level": "raid1", 00:12:49.584 "superblock": false, 00:12:49.584 "num_base_bdevs": 4, 00:12:49.584 "num_base_bdevs_discovered": 3, 00:12:49.584 "num_base_bdevs_operational": 3, 00:12:49.584 "base_bdevs_list": [ 00:12:49.584 { 00:12:49.584 "name": "spare", 00:12:49.584 "uuid": "d84c4d2c-38f4-5a42-872c-ef7cb5e6b752", 00:12:49.584 "is_configured": true, 00:12:49.584 "data_offset": 0, 00:12:49.584 "data_size": 65536 00:12:49.584 }, 00:12:49.584 { 00:12:49.584 "name": null, 
00:12:49.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.584 "is_configured": false, 00:12:49.584 "data_offset": 0, 00:12:49.584 "data_size": 65536 00:12:49.584 }, 00:12:49.584 { 00:12:49.584 "name": "BaseBdev3", 00:12:49.584 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:49.585 "is_configured": true, 00:12:49.585 "data_offset": 0, 00:12:49.585 "data_size": 65536 00:12:49.585 }, 00:12:49.585 { 00:12:49.585 "name": "BaseBdev4", 00:12:49.585 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:49.585 "is_configured": true, 00:12:49.585 "data_offset": 0, 00:12:49.585 "data_size": 65536 00:12:49.585 } 00:12:49.585 ] 00:12:49.585 }' 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.585 15:28:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.585 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.585 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.585 "name": "raid_bdev1", 00:12:49.585 "uuid": "81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:49.585 "strip_size_kb": 0, 00:12:49.585 "state": "online", 00:12:49.585 "raid_level": "raid1", 00:12:49.585 "superblock": false, 00:12:49.585 "num_base_bdevs": 4, 00:12:49.585 "num_base_bdevs_discovered": 3, 00:12:49.585 "num_base_bdevs_operational": 3, 00:12:49.585 "base_bdevs_list": [ 00:12:49.585 { 00:12:49.585 "name": "spare", 00:12:49.585 "uuid": "d84c4d2c-38f4-5a42-872c-ef7cb5e6b752", 00:12:49.585 "is_configured": true, 00:12:49.585 "data_offset": 0, 00:12:49.585 "data_size": 65536 00:12:49.585 }, 00:12:49.585 { 00:12:49.585 "name": null, 00:12:49.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.585 "is_configured": false, 00:12:49.585 "data_offset": 0, 00:12:49.585 "data_size": 65536 00:12:49.585 }, 00:12:49.585 { 00:12:49.585 "name": "BaseBdev3", 00:12:49.585 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:49.585 "is_configured": true, 00:12:49.585 "data_offset": 0, 00:12:49.585 "data_size": 65536 00:12:49.585 }, 00:12:49.585 { 00:12:49.585 "name": "BaseBdev4", 00:12:49.585 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:49.585 "is_configured": true, 00:12:49.585 "data_offset": 0, 00:12:49.585 "data_size": 65536 00:12:49.585 } 00:12:49.585 ] 00:12:49.585 }' 00:12:49.585 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.844 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.845 "name": "raid_bdev1", 00:12:49.845 "uuid": 
"81faced1-ba79-4e52-995e-efc2d48760bd", 00:12:49.845 "strip_size_kb": 0, 00:12:49.845 "state": "online", 00:12:49.845 "raid_level": "raid1", 00:12:49.845 "superblock": false, 00:12:49.845 "num_base_bdevs": 4, 00:12:49.845 "num_base_bdevs_discovered": 3, 00:12:49.845 "num_base_bdevs_operational": 3, 00:12:49.845 "base_bdevs_list": [ 00:12:49.845 { 00:12:49.845 "name": "spare", 00:12:49.845 "uuid": "d84c4d2c-38f4-5a42-872c-ef7cb5e6b752", 00:12:49.845 "is_configured": true, 00:12:49.845 "data_offset": 0, 00:12:49.845 "data_size": 65536 00:12:49.845 }, 00:12:49.845 { 00:12:49.845 "name": null, 00:12:49.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.845 "is_configured": false, 00:12:49.845 "data_offset": 0, 00:12:49.845 "data_size": 65536 00:12:49.845 }, 00:12:49.845 { 00:12:49.845 "name": "BaseBdev3", 00:12:49.845 "uuid": "84790dc6-a88c-51ea-bfd4-7ead3e9ddda5", 00:12:49.845 "is_configured": true, 00:12:49.845 "data_offset": 0, 00:12:49.845 "data_size": 65536 00:12:49.845 }, 00:12:49.845 { 00:12:49.845 "name": "BaseBdev4", 00:12:49.845 "uuid": "aff87d24-1a14-5dba-adef-2b8e6d0ef1fe", 00:12:49.845 "is_configured": true, 00:12:49.845 "data_offset": 0, 00:12:49.845 "data_size": 65536 00:12:49.845 } 00:12:49.845 ] 00:12:49.845 }' 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.845 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.104 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.104 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.104 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.104 [2024-11-26 15:28:48.545520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.104 [2024-11-26 15:28:48.545593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:12:50.364 00:12:50.364 Latency(us) 00:12:50.364 [2024-11-26T15:28:48.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.364 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:50.364 raid_bdev1 : 7.93 82.74 248.22 0.00 0.00 17163.32 290.96 117899.68 00:12:50.364 [2024-11-26T15:28:48.843Z] =================================================================================================================== 00:12:50.364 [2024-11-26T15:28:48.843Z] Total : 82.74 248.22 0.00 0.00 17163.32 290.96 117899.68 00:12:50.364 [2024-11-26 15:28:48.620575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.364 [2024-11-26 15:28:48.620653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.364 [2024-11-26 15:28:48.620778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.364 [2024-11-26 15:28:48.620830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:50.364 { 00:12:50.364 "results": [ 00:12:50.364 { 00:12:50.364 "job": "raid_bdev1", 00:12:50.364 "core_mask": "0x1", 00:12:50.364 "workload": "randrw", 00:12:50.364 "percentage": 50, 00:12:50.364 "status": "finished", 00:12:50.364 "queue_depth": 2, 00:12:50.364 "io_size": 3145728, 00:12:50.364 "runtime": 7.92857, 00:12:50.364 "iops": 82.73875364662229, 00:12:50.364 "mibps": 248.21626093986686, 00:12:50.364 "io_failed": 0, 00:12:50.364 "io_timeout": 0, 00:12:50.364 "avg_latency_us": 17163.320712680674, 00:12:50.364 "min_latency_us": 290.96487405212235, 00:12:50.364 "max_latency_us": 117899.6809901508 00:12:50.364 } 00:12:50.364 ], 00:12:50.364 "core_count": 1 00:12:50.364 } 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.364 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:50.624 /dev/nbd0 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:50.624 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.624 1+0 records in 00:12:50.625 1+0 records out 00:12:50.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575547 s, 7.1 MB/s 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.625 15:28:48 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.625 15:28:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:50.885 /dev/nbd1 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:50.885 15:28:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:50.885 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.885 1+0 records in 00:12:50.885 1+0 records out 00:12:50.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284399 s, 14.4 MB/s 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.886 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks 
/var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.146 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:51.406 /dev/nbd1 00:12:51.406 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.406 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.406 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:51.406 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:51.406 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.406 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.407 1+0 records in 00:12:51.407 1+0 records out 00:12:51.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246771 s, 16.6 MB/s 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:51.407 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.407 15:28:49 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.667 15:28:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.667 15:28:50 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 90872 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 90872 ']' 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 90872 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.667 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90872 00:12:51.927 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.927 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.927 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90872' 00:12:51.927 killing process with pid 90872 00:12:51.927 Received shutdown signal, test time was about 9.471450 seconds 00:12:51.927 00:12:51.927 Latency(us) 00:12:51.927 [2024-11-26T15:28:50.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.927 [2024-11-26T15:28:50.406Z] 
=================================================================================================================== 00:12:51.927 [2024-11-26T15:28:50.406Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:51.927 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 90872 00:12:51.927 [2024-11-26 15:28:50.160429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.927 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 90872 00:12:51.927 [2024-11-26 15:28:50.205339] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:52.188 00:12:52.188 real 0m11.382s 00:12:52.188 user 0m14.606s 00:12:52.188 sys 0m1.786s 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.188 ************************************ 00:12:52.188 END TEST raid_rebuild_test_io 00:12:52.188 ************************************ 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.188 15:28:50 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:52.188 15:28:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:52.188 15:28:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.188 15:28:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.188 ************************************ 00:12:52.188 START TEST raid_rebuild_test_sb_io 00:12:52.188 ************************************ 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:52.188 15:28:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91261 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91261 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 91261 ']' 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:52.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.188 15:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.188 [2024-11-26 15:28:50.617786] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:12:52.188 [2024-11-26 15:28:50.618065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91261 ] 00:12:52.188 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:52.188 Zero copy mechanism will not be used. 00:12:52.448 [2024-11-26 15:28:50.779819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:52.448 [2024-11-26 15:28:50.817566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.448 [2024-11-26 15:28:50.842954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.448 [2024-11-26 15:28:50.885118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.448 [2024-11-26 15:28:50.885244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.019 BaseBdev1_malloc 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.019 [2024-11-26 15:28:51.471846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:53.019 [2024-11-26 15:28:51.471935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.019 [2024-11-26 15:28:51.471960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:12:53.019 [2024-11-26 15:28:51.471982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.019 [2024-11-26 15:28:51.474063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.019 [2024-11-26 15:28:51.474106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:53.019 BaseBdev1 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.019 BaseBdev2_malloc 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.019 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.019 [2024-11-26 15:28:51.492158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:53.019 [2024-11-26 15:28:51.492275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.019 [2024-11-26 15:28:51.492307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:53.019 [2024-11-26 15:28:51.492336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.280 [2024-11-26 15:28:51.494364] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.280 [2024-11-26 15:28:51.494436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:53.280 BaseBdev2 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.280 BaseBdev3_malloc 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.280 [2024-11-26 15:28:51.516492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:53.280 [2024-11-26 15:28:51.516596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.280 [2024-11-26 15:28:51.516639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:53.280 [2024-11-26 15:28:51.516687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.280 [2024-11-26 15:28:51.518725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.280 [2024-11-26 15:28:51.518795] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:12:53.280 BaseBdev3 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.280 BaseBdev4_malloc 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.280 [2024-11-26 15:28:51.561783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:53.280 [2024-11-26 15:28:51.561868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.280 [2024-11-26 15:28:51.561905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:53.280 [2024-11-26 15:28:51.561925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.280 [2024-11-26 15:28:51.565677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.280 [2024-11-26 15:28:51.565741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:53.280 BaseBdev4 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.280 spare_malloc 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.280 spare_delay 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.280 [2024-11-26 15:28:51.602827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:53.280 [2024-11-26 15:28:51.602922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.280 [2024-11-26 15:28:51.602965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:53.280 [2024-11-26 15:28:51.602978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.280 [2024-11-26 15:28:51.605015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.280 [2024-11-26 15:28:51.605054] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:53.280 spare 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.280 [2024-11-26 15:28:51.614898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.280 [2024-11-26 15:28:51.616729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.280 [2024-11-26 15:28:51.616831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.280 [2024-11-26 15:28:51.616897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:53.280 [2024-11-26 15:28:51.617103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:53.280 [2024-11-26 15:28:51.617154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:53.280 [2024-11-26 15:28:51.617409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:53.280 [2024-11-26 15:28:51.617588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:53.280 [2024-11-26 15:28:51.617630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:53.280 [2024-11-26 15:28:51.617789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.280 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.280 "name": "raid_bdev1", 00:12:53.280 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:12:53.280 "strip_size_kb": 0, 00:12:53.281 "state": "online", 00:12:53.281 "raid_level": "raid1", 
00:12:53.281 "superblock": true, 00:12:53.281 "num_base_bdevs": 4, 00:12:53.281 "num_base_bdevs_discovered": 4, 00:12:53.281 "num_base_bdevs_operational": 4, 00:12:53.281 "base_bdevs_list": [ 00:12:53.281 { 00:12:53.281 "name": "BaseBdev1", 00:12:53.281 "uuid": "ac47ac4c-c428-50cc-9c39-9fad0c6b87c4", 00:12:53.281 "is_configured": true, 00:12:53.281 "data_offset": 2048, 00:12:53.281 "data_size": 63488 00:12:53.281 }, 00:12:53.281 { 00:12:53.281 "name": "BaseBdev2", 00:12:53.281 "uuid": "58a7389b-be57-5d90-a1eb-334bacf3c7fd", 00:12:53.281 "is_configured": true, 00:12:53.281 "data_offset": 2048, 00:12:53.281 "data_size": 63488 00:12:53.281 }, 00:12:53.281 { 00:12:53.281 "name": "BaseBdev3", 00:12:53.281 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:12:53.281 "is_configured": true, 00:12:53.281 "data_offset": 2048, 00:12:53.281 "data_size": 63488 00:12:53.281 }, 00:12:53.281 { 00:12:53.281 "name": "BaseBdev4", 00:12:53.281 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:12:53.281 "is_configured": true, 00:12:53.281 "data_offset": 2048, 00:12:53.281 "data_size": 63488 00:12:53.281 } 00:12:53.281 ] 00:12:53.281 }' 00:12:53.281 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.281 15:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.852 [2024-11-26 15:28:52.083278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.852 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.852 [2024-11-26 15:28:52.162993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.853 15:28:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.853 "name": "raid_bdev1", 00:12:53.853 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:12:53.853 "strip_size_kb": 0, 00:12:53.853 "state": "online", 00:12:53.853 "raid_level": "raid1", 00:12:53.853 "superblock": true, 00:12:53.853 "num_base_bdevs": 4, 00:12:53.853 "num_base_bdevs_discovered": 3, 00:12:53.853 "num_base_bdevs_operational": 3, 00:12:53.853 "base_bdevs_list": [ 00:12:53.853 { 00:12:53.853 "name": null, 00:12:53.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.853 "is_configured": false, 00:12:53.853 "data_offset": 0, 00:12:53.853 "data_size": 
63488 00:12:53.853 }, 00:12:53.853 { 00:12:53.853 "name": "BaseBdev2", 00:12:53.853 "uuid": "58a7389b-be57-5d90-a1eb-334bacf3c7fd", 00:12:53.853 "is_configured": true, 00:12:53.853 "data_offset": 2048, 00:12:53.853 "data_size": 63488 00:12:53.853 }, 00:12:53.853 { 00:12:53.853 "name": "BaseBdev3", 00:12:53.853 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:12:53.853 "is_configured": true, 00:12:53.853 "data_offset": 2048, 00:12:53.853 "data_size": 63488 00:12:53.853 }, 00:12:53.853 { 00:12:53.853 "name": "BaseBdev4", 00:12:53.853 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:12:53.853 "is_configured": true, 00:12:53.853 "data_offset": 2048, 00:12:53.853 "data_size": 63488 00:12:53.853 } 00:12:53.853 ] 00:12:53.853 }' 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.853 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.853 [2024-11-26 15:28:52.237032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:12:53.853 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:53.853 Zero copy mechanism will not be used. 00:12:53.853 Running I/O for 60 seconds... 
00:12:54.424 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:54.424 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.424 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.424 [2024-11-26 15:28:52.644336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.424 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.424 15:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:54.424 [2024-11-26 15:28:52.685563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:54.424 [2024-11-26 15:28:52.687554] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.424 [2024-11-26 15:28:52.822734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:54.686 [2024-11-26 15:28:52.944194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:54.686 [2024-11-26 15:28:52.944472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:54.946 183.00 IOPS, 549.00 MiB/s [2024-11-26T15:28:53.425Z] [2024-11-26 15:28:53.283172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:54.946 [2024-11-26 15:28:53.284379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:55.206 [2024-11-26 15:28:53.491458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:55.206 [2024-11-26 15:28:53.491964] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.468 "name": "raid_bdev1", 00:12:55.468 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:12:55.468 "strip_size_kb": 0, 00:12:55.468 "state": "online", 00:12:55.468 "raid_level": "raid1", 00:12:55.468 "superblock": true, 00:12:55.468 "num_base_bdevs": 4, 00:12:55.468 "num_base_bdevs_discovered": 4, 00:12:55.468 "num_base_bdevs_operational": 4, 00:12:55.468 "process": { 00:12:55.468 "type": "rebuild", 00:12:55.468 "target": "spare", 00:12:55.468 "progress": { 00:12:55.468 "blocks": 10240, 00:12:55.468 "percent": 16 00:12:55.468 } 00:12:55.468 }, 00:12:55.468 "base_bdevs_list": [ 00:12:55.468 { 00:12:55.468 "name": "spare", 
00:12:55.468 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:12:55.468 "is_configured": true, 00:12:55.468 "data_offset": 2048, 00:12:55.468 "data_size": 63488 00:12:55.468 }, 00:12:55.468 { 00:12:55.468 "name": "BaseBdev2", 00:12:55.468 "uuid": "58a7389b-be57-5d90-a1eb-334bacf3c7fd", 00:12:55.468 "is_configured": true, 00:12:55.468 "data_offset": 2048, 00:12:55.468 "data_size": 63488 00:12:55.468 }, 00:12:55.468 { 00:12:55.468 "name": "BaseBdev3", 00:12:55.468 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:12:55.468 "is_configured": true, 00:12:55.468 "data_offset": 2048, 00:12:55.468 "data_size": 63488 00:12:55.468 }, 00:12:55.468 { 00:12:55.468 "name": "BaseBdev4", 00:12:55.468 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:12:55.468 "is_configured": true, 00:12:55.468 "data_offset": 2048, 00:12:55.468 "data_size": 63488 00:12:55.468 } 00:12:55.468 ] 00:12:55.468 }' 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.468 [2024-11-26 15:28:53.816597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.468 [2024-11-26 15:28:53.889742] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:55.468 [2024-11-26 15:28:53.899625] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.468 [2024-11-26 15:28:53.899675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.468 [2024-11-26 15:28:53.899687] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:55.468 [2024-11-26 15:28:53.916804] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.468 15:28:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.468 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.729 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.729 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.729 "name": "raid_bdev1", 00:12:55.729 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:12:55.729 "strip_size_kb": 0, 00:12:55.729 "state": "online", 00:12:55.729 "raid_level": "raid1", 00:12:55.729 "superblock": true, 00:12:55.729 "num_base_bdevs": 4, 00:12:55.729 "num_base_bdevs_discovered": 3, 00:12:55.729 "num_base_bdevs_operational": 3, 00:12:55.729 "base_bdevs_list": [ 00:12:55.729 { 00:12:55.729 "name": null, 00:12:55.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.729 "is_configured": false, 00:12:55.729 "data_offset": 0, 00:12:55.729 "data_size": 63488 00:12:55.729 }, 00:12:55.729 { 00:12:55.729 "name": "BaseBdev2", 00:12:55.729 "uuid": "58a7389b-be57-5d90-a1eb-334bacf3c7fd", 00:12:55.729 "is_configured": true, 00:12:55.729 "data_offset": 2048, 00:12:55.729 "data_size": 63488 00:12:55.729 }, 00:12:55.729 { 00:12:55.729 "name": "BaseBdev3", 00:12:55.729 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:12:55.729 "is_configured": true, 00:12:55.729 "data_offset": 2048, 00:12:55.729 "data_size": 63488 00:12:55.729 }, 00:12:55.729 { 00:12:55.729 "name": "BaseBdev4", 00:12:55.729 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:12:55.729 "is_configured": true, 00:12:55.729 "data_offset": 2048, 00:12:55.729 "data_size": 63488 00:12:55.729 } 00:12:55.729 ] 00:12:55.729 }' 00:12:55.729 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.729 15:28:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.989 163.00 IOPS, 489.00 MiB/s [2024-11-26T15:28:54.468Z] 15:28:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:55.989 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.989 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:55.989 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.990 "name": "raid_bdev1", 00:12:55.990 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:12:55.990 "strip_size_kb": 0, 00:12:55.990 "state": "online", 00:12:55.990 "raid_level": "raid1", 00:12:55.990 "superblock": true, 00:12:55.990 "num_base_bdevs": 4, 00:12:55.990 "num_base_bdevs_discovered": 3, 00:12:55.990 "num_base_bdevs_operational": 3, 00:12:55.990 "base_bdevs_list": [ 00:12:55.990 { 00:12:55.990 "name": null, 00:12:55.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.990 "is_configured": false, 00:12:55.990 "data_offset": 0, 00:12:55.990 "data_size": 63488 00:12:55.990 }, 00:12:55.990 { 00:12:55.990 "name": "BaseBdev2", 00:12:55.990 "uuid": "58a7389b-be57-5d90-a1eb-334bacf3c7fd", 00:12:55.990 "is_configured": true, 00:12:55.990 "data_offset": 
2048, 00:12:55.990 "data_size": 63488 00:12:55.990 }, 00:12:55.990 { 00:12:55.990 "name": "BaseBdev3", 00:12:55.990 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:12:55.990 "is_configured": true, 00:12:55.990 "data_offset": 2048, 00:12:55.990 "data_size": 63488 00:12:55.990 }, 00:12:55.990 { 00:12:55.990 "name": "BaseBdev4", 00:12:55.990 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:12:55.990 "is_configured": true, 00:12:55.990 "data_offset": 2048, 00:12:55.990 "data_size": 63488 00:12:55.990 } 00:12:55.990 ] 00:12:55.990 }' 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:55.990 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.250 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.250 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:56.250 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.250 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.250 [2024-11-26 15:28:54.493111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:56.250 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.250 15:28:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:56.250 [2024-11-26 15:28:54.534729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:12:56.250 [2024-11-26 15:28:54.536766] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.250 [2024-11-26 15:28:54.657058] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:56.250 [2024-11-26 15:28:54.658239] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:56.510 [2024-11-26 15:28:54.872535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:56.510 [2024-11-26 15:28:54.872726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:56.770 [2024-11-26 15:28:55.204831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:56.770 [2024-11-26 15:28:55.205197] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:57.030 162.00 IOPS, 486.00 MiB/s [2024-11-26T15:28:55.509Z] [2024-11-26 15:28:55.321698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:57.030 [2024-11-26 15:28:55.321978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.290 15:28:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.290 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.290 "name": "raid_bdev1", 00:12:57.290 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:12:57.290 "strip_size_kb": 0, 00:12:57.290 "state": "online", 00:12:57.290 "raid_level": "raid1", 00:12:57.290 "superblock": true, 00:12:57.290 "num_base_bdevs": 4, 00:12:57.290 "num_base_bdevs_discovered": 4, 00:12:57.290 "num_base_bdevs_operational": 4, 00:12:57.290 "process": { 00:12:57.290 "type": "rebuild", 00:12:57.290 "target": "spare", 00:12:57.290 "progress": { 00:12:57.290 "blocks": 12288, 00:12:57.290 "percent": 19 00:12:57.290 } 00:12:57.290 }, 00:12:57.290 "base_bdevs_list": [ 00:12:57.290 { 00:12:57.290 "name": "spare", 00:12:57.290 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:12:57.290 "is_configured": true, 00:12:57.290 "data_offset": 2048, 00:12:57.291 "data_size": 63488 00:12:57.291 }, 00:12:57.291 { 00:12:57.291 "name": "BaseBdev2", 00:12:57.291 "uuid": "58a7389b-be57-5d90-a1eb-334bacf3c7fd", 00:12:57.291 "is_configured": true, 00:12:57.291 "data_offset": 2048, 00:12:57.291 "data_size": 63488 00:12:57.291 }, 00:12:57.291 { 00:12:57.291 "name": "BaseBdev3", 00:12:57.291 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:12:57.291 "is_configured": true, 00:12:57.291 "data_offset": 2048, 00:12:57.291 "data_size": 63488 00:12:57.291 }, 00:12:57.291 { 00:12:57.291 "name": "BaseBdev4", 00:12:57.291 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:12:57.291 "is_configured": true, 00:12:57.291 "data_offset": 2048, 00:12:57.291 
"data_size": 63488 00:12:57.291 } 00:12:57.291 ] 00:12:57.291 }' 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:57.291 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:57.291 [2024-11-26 15:28:55.665374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.291 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.291 [2024-11-26 15:28:55.680235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:57.551 [2024-11-26 15:28:55.964284] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:12:57.551 [2024-11-26 15:28:55.964404] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.551 15:28:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.551 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.811 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.812 "name": "raid_bdev1", 00:12:57.812 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:12:57.812 "strip_size_kb": 0, 00:12:57.812 "state": "online", 00:12:57.812 "raid_level": "raid1", 00:12:57.812 "superblock": true, 00:12:57.812 "num_base_bdevs": 4, 00:12:57.812 "num_base_bdevs_discovered": 3, 00:12:57.812 
"num_base_bdevs_operational": 3, 00:12:57.812 "process": { 00:12:57.812 "type": "rebuild", 00:12:57.812 "target": "spare", 00:12:57.812 "progress": { 00:12:57.812 "blocks": 18432, 00:12:57.812 "percent": 29 00:12:57.812 } 00:12:57.812 }, 00:12:57.812 "base_bdevs_list": [ 00:12:57.812 { 00:12:57.812 "name": "spare", 00:12:57.812 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:12:57.812 "is_configured": true, 00:12:57.812 "data_offset": 2048, 00:12:57.812 "data_size": 63488 00:12:57.812 }, 00:12:57.812 { 00:12:57.812 "name": null, 00:12:57.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.812 "is_configured": false, 00:12:57.812 "data_offset": 0, 00:12:57.812 "data_size": 63488 00:12:57.812 }, 00:12:57.812 { 00:12:57.812 "name": "BaseBdev3", 00:12:57.812 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:12:57.812 "is_configured": true, 00:12:57.812 "data_offset": 2048, 00:12:57.812 "data_size": 63488 00:12:57.812 }, 00:12:57.812 { 00:12:57.812 "name": "BaseBdev4", 00:12:57.812 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:12:57.812 "is_configured": true, 00:12:57.812 "data_offset": 2048, 00:12:57.812 "data_size": 63488 00:12:57.812 } 00:12:57.812 ] 00:12:57.812 }' 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.812 [2024-11-26 15:28:56.075344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.812 "name": "raid_bdev1", 00:12:57.812 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:12:57.812 "strip_size_kb": 0, 00:12:57.812 "state": "online", 00:12:57.812 "raid_level": "raid1", 00:12:57.812 "superblock": true, 00:12:57.812 "num_base_bdevs": 4, 00:12:57.812 "num_base_bdevs_discovered": 3, 00:12:57.812 "num_base_bdevs_operational": 3, 00:12:57.812 "process": { 00:12:57.812 "type": "rebuild", 00:12:57.812 "target": "spare", 00:12:57.812 "progress": { 00:12:57.812 "blocks": 20480, 00:12:57.812 "percent": 32 00:12:57.812 } 00:12:57.812 }, 00:12:57.812 "base_bdevs_list": [ 00:12:57.812 { 00:12:57.812 "name": "spare", 00:12:57.812 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 
00:12:57.812 "is_configured": true, 00:12:57.812 "data_offset": 2048, 00:12:57.812 "data_size": 63488 00:12:57.812 }, 00:12:57.812 { 00:12:57.812 "name": null, 00:12:57.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.812 "is_configured": false, 00:12:57.812 "data_offset": 0, 00:12:57.812 "data_size": 63488 00:12:57.812 }, 00:12:57.812 { 00:12:57.812 "name": "BaseBdev3", 00:12:57.812 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:12:57.812 "is_configured": true, 00:12:57.812 "data_offset": 2048, 00:12:57.812 "data_size": 63488 00:12:57.812 }, 00:12:57.812 { 00:12:57.812 "name": "BaseBdev4", 00:12:57.812 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:12:57.812 "is_configured": true, 00:12:57.812 "data_offset": 2048, 00:12:57.812 "data_size": 63488 00:12:57.812 } 00:12:57.812 ] 00:12:57.812 }' 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.812 140.50 IOPS, 421.50 MiB/s [2024-11-26T15:28:56.291Z] 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.812 15:28:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:58.072 [2024-11-26 15:28:56.290030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:58.072 [2024-11-26 15:28:56.290403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:58.072 [2024-11-26 15:28:56.507585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:58.072 [2024-11-26 15:28:56.508463] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:58.332 [2024-11-26 15:28:56.718479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:58.591 [2024-11-26 15:28:56.932169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:58.850 127.20 IOPS, 381.60 MiB/s [2024-11-26T15:28:57.329Z] 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.850 "name": "raid_bdev1", 00:12:58.850 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:12:58.850 "strip_size_kb": 0, 00:12:58.850 "state": 
"online", 00:12:58.850 "raid_level": "raid1", 00:12:58.850 "superblock": true, 00:12:58.850 "num_base_bdevs": 4, 00:12:58.850 "num_base_bdevs_discovered": 3, 00:12:58.850 "num_base_bdevs_operational": 3, 00:12:58.850 "process": { 00:12:58.850 "type": "rebuild", 00:12:58.850 "target": "spare", 00:12:58.850 "progress": { 00:12:58.850 "blocks": 38912, 00:12:58.850 "percent": 61 00:12:58.850 } 00:12:58.850 }, 00:12:58.850 "base_bdevs_list": [ 00:12:58.850 { 00:12:58.850 "name": "spare", 00:12:58.850 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:12:58.850 "is_configured": true, 00:12:58.850 "data_offset": 2048, 00:12:58.850 "data_size": 63488 00:12:58.850 }, 00:12:58.850 { 00:12:58.850 "name": null, 00:12:58.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.850 "is_configured": false, 00:12:58.850 "data_offset": 0, 00:12:58.850 "data_size": 63488 00:12:58.850 }, 00:12:58.850 { 00:12:58.850 "name": "BaseBdev3", 00:12:58.850 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:12:58.850 "is_configured": true, 00:12:58.850 "data_offset": 2048, 00:12:58.850 "data_size": 63488 00:12:58.850 }, 00:12:58.850 { 00:12:58.850 "name": "BaseBdev4", 00:12:58.850 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:12:58.850 "is_configured": true, 00:12:58.850 "data_offset": 2048, 00:12:58.850 "data_size": 63488 00:12:58.850 } 00:12:58.850 ] 00:12:58.850 }' 00:12:58.850 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.111 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.111 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.111 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.111 15:28:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.682 [2024-11-26 15:28:58.027450] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:59.942 111.67 IOPS, 335.00 MiB/s [2024-11-26T15:28:58.421Z] 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.942 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.942 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.942 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.942 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.942 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.942 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.942 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.942 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.942 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.202 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.202 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.202 "name": "raid_bdev1", 00:13:00.202 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:00.202 "strip_size_kb": 0, 00:13:00.202 "state": "online", 00:13:00.202 "raid_level": "raid1", 00:13:00.202 "superblock": true, 00:13:00.202 "num_base_bdevs": 4, 00:13:00.202 "num_base_bdevs_discovered": 3, 00:13:00.202 "num_base_bdevs_operational": 3, 00:13:00.202 "process": { 00:13:00.202 "type": "rebuild", 00:13:00.202 "target": "spare", 00:13:00.202 "progress": { 
00:13:00.202 "blocks": 57344, 00:13:00.202 "percent": 90 00:13:00.202 } 00:13:00.202 }, 00:13:00.202 "base_bdevs_list": [ 00:13:00.202 { 00:13:00.202 "name": "spare", 00:13:00.202 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:13:00.202 "is_configured": true, 00:13:00.202 "data_offset": 2048, 00:13:00.202 "data_size": 63488 00:13:00.202 }, 00:13:00.202 { 00:13:00.202 "name": null, 00:13:00.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.202 "is_configured": false, 00:13:00.202 "data_offset": 0, 00:13:00.202 "data_size": 63488 00:13:00.202 }, 00:13:00.202 { 00:13:00.202 "name": "BaseBdev3", 00:13:00.202 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:00.202 "is_configured": true, 00:13:00.202 "data_offset": 2048, 00:13:00.202 "data_size": 63488 00:13:00.202 }, 00:13:00.202 { 00:13:00.202 "name": "BaseBdev4", 00:13:00.202 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:00.202 "is_configured": true, 00:13:00.202 "data_offset": 2048, 00:13:00.202 "data_size": 63488 00:13:00.202 } 00:13:00.202 ] 00:13:00.202 }' 00:13:00.202 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.202 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.202 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.202 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.202 15:28:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:00.202 [2024-11-26 15:28:58.669206] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:00.462 [2024-11-26 15:28:58.769172] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:00.462 [2024-11-26 15:28:58.771426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:01.292 101.71 IOPS, 305.14 MiB/s [2024-11-26T15:28:59.771Z] 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.292 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.292 "name": "raid_bdev1", 00:13:01.292 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:01.292 "strip_size_kb": 0, 00:13:01.292 "state": "online", 00:13:01.292 "raid_level": "raid1", 00:13:01.292 "superblock": true, 00:13:01.292 "num_base_bdevs": 4, 00:13:01.292 "num_base_bdevs_discovered": 3, 00:13:01.292 "num_base_bdevs_operational": 3, 00:13:01.292 "base_bdevs_list": [ 00:13:01.292 { 00:13:01.292 "name": "spare", 00:13:01.292 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:13:01.292 "is_configured": true, 00:13:01.292 "data_offset": 2048, 00:13:01.292 
"data_size": 63488 00:13:01.292 }, 00:13:01.292 { 00:13:01.292 "name": null, 00:13:01.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.292 "is_configured": false, 00:13:01.292 "data_offset": 0, 00:13:01.292 "data_size": 63488 00:13:01.292 }, 00:13:01.292 { 00:13:01.292 "name": "BaseBdev3", 00:13:01.292 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:01.292 "is_configured": true, 00:13:01.292 "data_offset": 2048, 00:13:01.292 "data_size": 63488 00:13:01.292 }, 00:13:01.293 { 00:13:01.293 "name": "BaseBdev4", 00:13:01.293 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:01.293 "is_configured": true, 00:13:01.293 "data_offset": 2048, 00:13:01.293 "data_size": 63488 00:13:01.293 } 00:13:01.293 ] 00:13:01.293 }' 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.293 "name": "raid_bdev1", 00:13:01.293 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:01.293 "strip_size_kb": 0, 00:13:01.293 "state": "online", 00:13:01.293 "raid_level": "raid1", 00:13:01.293 "superblock": true, 00:13:01.293 "num_base_bdevs": 4, 00:13:01.293 "num_base_bdevs_discovered": 3, 00:13:01.293 "num_base_bdevs_operational": 3, 00:13:01.293 "base_bdevs_list": [ 00:13:01.293 { 00:13:01.293 "name": "spare", 00:13:01.293 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:13:01.293 "is_configured": true, 00:13:01.293 "data_offset": 2048, 00:13:01.293 "data_size": 63488 00:13:01.293 }, 00:13:01.293 { 00:13:01.293 "name": null, 00:13:01.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.293 "is_configured": false, 00:13:01.293 "data_offset": 0, 00:13:01.293 "data_size": 63488 00:13:01.293 }, 00:13:01.293 { 00:13:01.293 "name": "BaseBdev3", 00:13:01.293 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:01.293 "is_configured": true, 00:13:01.293 "data_offset": 2048, 00:13:01.293 "data_size": 63488 00:13:01.293 }, 00:13:01.293 { 00:13:01.293 "name": "BaseBdev4", 00:13:01.293 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:01.293 "is_configured": true, 00:13:01.293 "data_offset": 2048, 00:13:01.293 "data_size": 63488 00:13:01.293 } 00:13:01.293 ] 00:13:01.293 }' 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.293 15:28:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.293 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:01.553 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.553 "name": "raid_bdev1", 00:13:01.553 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:01.553 "strip_size_kb": 0, 00:13:01.553 "state": "online", 00:13:01.553 "raid_level": "raid1", 00:13:01.553 "superblock": true, 00:13:01.553 "num_base_bdevs": 4, 00:13:01.553 "num_base_bdevs_discovered": 3, 00:13:01.553 "num_base_bdevs_operational": 3, 00:13:01.553 "base_bdevs_list": [ 00:13:01.553 { 00:13:01.553 "name": "spare", 00:13:01.553 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:13:01.553 "is_configured": true, 00:13:01.553 "data_offset": 2048, 00:13:01.553 "data_size": 63488 00:13:01.553 }, 00:13:01.553 { 00:13:01.553 "name": null, 00:13:01.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.553 "is_configured": false, 00:13:01.553 "data_offset": 0, 00:13:01.553 "data_size": 63488 00:13:01.553 }, 00:13:01.553 { 00:13:01.553 "name": "BaseBdev3", 00:13:01.553 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:01.554 "is_configured": true, 00:13:01.554 "data_offset": 2048, 00:13:01.554 "data_size": 63488 00:13:01.554 }, 00:13:01.554 { 00:13:01.554 "name": "BaseBdev4", 00:13:01.554 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:01.554 "is_configured": true, 00:13:01.554 "data_offset": 2048, 00:13:01.554 "data_size": 63488 00:13:01.554 } 00:13:01.554 ] 00:13:01.554 }' 00:13:01.554 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.554 15:28:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.815 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:01.815 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.815 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.815 [2024-11-26 
15:29:00.238813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.815 [2024-11-26 15:29:00.238912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.815 92.62 IOPS, 277.88 MiB/s 00:13:01.815 Latency(us) 00:13:01.815 [2024-11-26T15:29:00.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.815 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:01.815 raid_bdev1 : 8.03 92.47 277.41 0.00 0.00 14294.52 260.62 106932.27 00:13:01.815 [2024-11-26T15:29:00.294Z] =================================================================================================================== 00:13:01.815 [2024-11-26T15:29:00.294Z] Total : 92.47 277.41 0.00 0.00 14294.52 260.62 106932.27 00:13:01.815 [2024-11-26 15:29:00.278544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.815 [2024-11-26 15:29:00.278646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.815 [2024-11-26 15:29:00.278763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.815 { 00:13:01.815 "results": [ 00:13:01.815 { 00:13:01.815 "job": "raid_bdev1", 00:13:01.815 "core_mask": "0x1", 00:13:01.815 "workload": "randrw", 00:13:01.815 "percentage": 50, 00:13:01.815 "status": "finished", 00:13:01.815 "queue_depth": 2, 00:13:01.815 "io_size": 3145728, 00:13:01.815 "runtime": 8.034955, 00:13:01.815 "iops": 92.4709597004588, 00:13:01.815 "mibps": 277.4128791013764, 00:13:01.815 "io_failed": 0, 00:13:01.815 "io_timeout": 0, 00:13:01.815 "avg_latency_us": 14294.521246500795, 00:13:01.815 "min_latency_us": 260.6188442430053, 00:13:01.815 "max_latency_us": 106932.26880502049 00:13:01.815 } 00:13:01.815 ], 00:13:01.815 "core_count": 1 00:13:01.815 } 00:13:01.815 [2024-11-26 15:29:00.278803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:01.815 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.815 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.815 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:01.815 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.815 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.076 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.076 
15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:02.336 /dev/nbd0 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.336 1+0 records in 00:13:02.336 1+0 records out 00:13:02.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268684 s, 15.2 MB/s 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:02.336 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.337 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:02.597 /dev/nbd1 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.597 1+0 records in 00:13:02.597 1+0 records out 00:13:02.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353543 s, 11.6 MB/s 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.597 15:29:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.597 15:29:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.867 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:03.141 /dev/nbd1 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:03.141 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.141 1+0 records in 00:13:03.141 1+0 records out 00:13:03.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521369 s, 7.9 MB/s 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:03.142 15:29:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.142 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.401 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 [2024-11-26 15:29:01.911874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:03.661 [2024-11-26 15:29:01.911986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.661 [2024-11-26 15:29:01.912020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:03.661 [2024-11-26 15:29:01.912048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.661 [2024-11-26 15:29:01.914199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.661 [2024-11-26 15:29:01.914285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.661 [2024-11-26 15:29:01.914388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:03.661 [2024-11-26 15:29:01.914455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.661 [2024-11-26 15:29:01.914610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.661 [2024-11-26 15:29:01.914739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.661 spare 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.661 15:29:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.661 [2024-11-26 15:29:02.014828] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:13:03.661 [2024-11-26 15:29:02.014891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:03.661 [2024-11-26 15:29:02.015206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:13:03.661 [2024-11-26 15:29:02.015387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:03.661 [2024-11-26 15:29:02.015436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:03.661 [2024-11-26 15:29:02.015595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.661 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.662 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.662 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.662 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.662 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.662 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.662 "name": "raid_bdev1", 00:13:03.662 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:03.662 "strip_size_kb": 0, 00:13:03.662 "state": "online", 00:13:03.662 "raid_level": "raid1", 00:13:03.662 "superblock": true, 00:13:03.662 "num_base_bdevs": 4, 00:13:03.662 "num_base_bdevs_discovered": 3, 00:13:03.662 "num_base_bdevs_operational": 3, 00:13:03.662 "base_bdevs_list": [ 00:13:03.662 { 00:13:03.662 "name": "spare", 00:13:03.662 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:13:03.662 "is_configured": true, 00:13:03.662 "data_offset": 2048, 00:13:03.662 "data_size": 63488 00:13:03.662 }, 00:13:03.662 { 00:13:03.662 "name": null, 00:13:03.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.662 "is_configured": false, 00:13:03.662 "data_offset": 2048, 00:13:03.662 "data_size": 63488 00:13:03.662 }, 00:13:03.662 { 00:13:03.662 "name": "BaseBdev3", 00:13:03.662 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:03.662 "is_configured": true, 00:13:03.662 "data_offset": 2048, 00:13:03.662 "data_size": 63488 00:13:03.662 }, 00:13:03.662 { 00:13:03.662 "name": "BaseBdev4", 00:13:03.662 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:03.662 "is_configured": true, 00:13:03.662 "data_offset": 2048, 00:13:03.662 "data_size": 63488 00:13:03.662 } 00:13:03.662 ] 00:13:03.662 }' 00:13:03.662 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.662 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.231 "name": "raid_bdev1", 00:13:04.231 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:04.231 "strip_size_kb": 0, 00:13:04.231 "state": "online", 00:13:04.231 "raid_level": "raid1", 00:13:04.231 "superblock": true, 00:13:04.231 "num_base_bdevs": 4, 00:13:04.231 "num_base_bdevs_discovered": 3, 00:13:04.231 "num_base_bdevs_operational": 3, 00:13:04.231 "base_bdevs_list": [ 00:13:04.231 { 00:13:04.231 "name": "spare", 00:13:04.231 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:13:04.231 "is_configured": true, 00:13:04.231 "data_offset": 2048, 00:13:04.231 "data_size": 63488 00:13:04.231 }, 
00:13:04.231 { 00:13:04.231 "name": null, 00:13:04.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.231 "is_configured": false, 00:13:04.231 "data_offset": 2048, 00:13:04.231 "data_size": 63488 00:13:04.231 }, 00:13:04.231 { 00:13:04.231 "name": "BaseBdev3", 00:13:04.231 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:04.231 "is_configured": true, 00:13:04.231 "data_offset": 2048, 00:13:04.231 "data_size": 63488 00:13:04.231 }, 00:13:04.231 { 00:13:04.231 "name": "BaseBdev4", 00:13:04.231 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:04.231 "is_configured": true, 00:13:04.231 "data_offset": 2048, 00:13:04.231 "data_size": 63488 00:13:04.231 } 00:13:04.231 ] 00:13:04.231 }' 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.231 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:04.232 15:29:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.232 [2024-11-26 15:29:02.676162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.232 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.492 
15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.492 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.492 "name": "raid_bdev1", 00:13:04.492 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:04.492 "strip_size_kb": 0, 00:13:04.492 "state": "online", 00:13:04.492 "raid_level": "raid1", 00:13:04.492 "superblock": true, 00:13:04.492 "num_base_bdevs": 4, 00:13:04.492 "num_base_bdevs_discovered": 2, 00:13:04.492 "num_base_bdevs_operational": 2, 00:13:04.492 "base_bdevs_list": [ 00:13:04.492 { 00:13:04.492 "name": null, 00:13:04.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.492 "is_configured": false, 00:13:04.492 "data_offset": 0, 00:13:04.492 "data_size": 63488 00:13:04.492 }, 00:13:04.492 { 00:13:04.492 "name": null, 00:13:04.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.492 "is_configured": false, 00:13:04.492 "data_offset": 2048, 00:13:04.492 "data_size": 63488 00:13:04.492 }, 00:13:04.492 { 00:13:04.492 "name": "BaseBdev3", 00:13:04.492 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:04.492 "is_configured": true, 00:13:04.492 "data_offset": 2048, 00:13:04.492 "data_size": 63488 00:13:04.492 }, 00:13:04.492 { 00:13:04.492 "name": "BaseBdev4", 00:13:04.492 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:04.492 "is_configured": true, 00:13:04.492 "data_offset": 2048, 00:13:04.492 "data_size": 63488 00:13:04.492 } 00:13:04.492 ] 00:13:04.492 }' 00:13:04.492 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.492 15:29:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.752 15:29:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:04.752 15:29:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.752 15:29:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.752 [2024-11-26 15:29:03.132380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.752 [2024-11-26 15:29:03.132551] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:04.752 [2024-11-26 15:29:03.132566] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:04.752 [2024-11-26 15:29:03.132616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.752 [2024-11-26 15:29:03.136949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037640 00:13:04.752 15:29:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.752 15:29:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:04.752 [2024-11-26 15:29:03.138843] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.694 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.694 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.694 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.694 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.694 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.694 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.694 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.694 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.694 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.955 "name": "raid_bdev1", 00:13:05.955 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:05.955 "strip_size_kb": 0, 00:13:05.955 "state": "online", 00:13:05.955 "raid_level": "raid1", 00:13:05.955 "superblock": true, 00:13:05.955 "num_base_bdevs": 4, 00:13:05.955 "num_base_bdevs_discovered": 3, 00:13:05.955 "num_base_bdevs_operational": 3, 00:13:05.955 "process": { 00:13:05.955 "type": "rebuild", 00:13:05.955 "target": "spare", 00:13:05.955 "progress": { 00:13:05.955 "blocks": 20480, 00:13:05.955 "percent": 32 00:13:05.955 } 00:13:05.955 }, 00:13:05.955 "base_bdevs_list": [ 00:13:05.955 { 00:13:05.955 "name": "spare", 00:13:05.955 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:13:05.955 "is_configured": true, 00:13:05.955 "data_offset": 2048, 00:13:05.955 "data_size": 63488 00:13:05.955 }, 00:13:05.955 { 00:13:05.955 "name": null, 00:13:05.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.955 "is_configured": false, 00:13:05.955 "data_offset": 2048, 00:13:05.955 "data_size": 63488 00:13:05.955 }, 00:13:05.955 { 00:13:05.955 "name": "BaseBdev3", 00:13:05.955 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:05.955 "is_configured": true, 00:13:05.955 "data_offset": 2048, 00:13:05.955 "data_size": 63488 00:13:05.955 }, 00:13:05.955 { 00:13:05.955 "name": "BaseBdev4", 00:13:05.955 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:05.955 "is_configured": true, 00:13:05.955 "data_offset": 2048, 00:13:05.955 "data_size": 63488 00:13:05.955 } 00:13:05.955 ] 00:13:05.955 }' 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.955 [2024-11-26 15:29:04.297971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.955 [2024-11-26 15:29:04.344855] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:05.955 [2024-11-26 15:29:04.344979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.955 [2024-11-26 15:29:04.345017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.955 [2024-11-26 15:29:04.345040] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.955 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.955 "name": "raid_bdev1", 00:13:05.955 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:05.955 "strip_size_kb": 0, 00:13:05.955 "state": "online", 00:13:05.955 "raid_level": "raid1", 00:13:05.955 "superblock": true, 00:13:05.955 "num_base_bdevs": 4, 00:13:05.955 "num_base_bdevs_discovered": 2, 00:13:05.955 "num_base_bdevs_operational": 2, 00:13:05.955 "base_bdevs_list": [ 00:13:05.955 { 00:13:05.955 "name": null, 00:13:05.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.955 "is_configured": false, 00:13:05.955 "data_offset": 0, 00:13:05.955 "data_size": 63488 00:13:05.956 }, 00:13:05.956 { 00:13:05.956 "name": null, 00:13:05.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.956 "is_configured": false, 00:13:05.956 
"data_offset": 2048, 00:13:05.956 "data_size": 63488 00:13:05.956 }, 00:13:05.956 { 00:13:05.956 "name": "BaseBdev3", 00:13:05.956 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:05.956 "is_configured": true, 00:13:05.956 "data_offset": 2048, 00:13:05.956 "data_size": 63488 00:13:05.956 }, 00:13:05.956 { 00:13:05.956 "name": "BaseBdev4", 00:13:05.956 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:05.956 "is_configured": true, 00:13:05.956 "data_offset": 2048, 00:13:05.956 "data_size": 63488 00:13:05.956 } 00:13:05.956 ] 00:13:05.956 }' 00:13:05.956 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.956 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.526 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:06.526 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.526 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.526 [2024-11-26 15:29:04.825648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:06.526 [2024-11-26 15:29:04.825752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.526 [2024-11-26 15:29:04.825788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:06.526 [2024-11-26 15:29:04.825817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.526 [2024-11-26 15:29:04.826315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.526 [2024-11-26 15:29:04.826360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:06.526 [2024-11-26 15:29:04.826448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:06.526 [2024-11-26 
15:29:04.826463] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:06.526 [2024-11-26 15:29:04.826472] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:06.526 [2024-11-26 15:29:04.826493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.526 spare 00:13:06.526 [2024-11-26 15:29:04.830876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037710 00:13:06.526 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.526 15:29:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:06.526 [2024-11-26 15:29:04.832741] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.466 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.466 "name": "raid_bdev1", 00:13:07.466 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:07.466 "strip_size_kb": 0, 00:13:07.466 "state": "online", 00:13:07.466 "raid_level": "raid1", 00:13:07.466 "superblock": true, 00:13:07.466 "num_base_bdevs": 4, 00:13:07.466 "num_base_bdevs_discovered": 3, 00:13:07.466 "num_base_bdevs_operational": 3, 00:13:07.466 "process": { 00:13:07.466 "type": "rebuild", 00:13:07.466 "target": "spare", 00:13:07.466 "progress": { 00:13:07.466 "blocks": 20480, 00:13:07.466 "percent": 32 00:13:07.466 } 00:13:07.466 }, 00:13:07.466 "base_bdevs_list": [ 00:13:07.466 { 00:13:07.466 "name": "spare", 00:13:07.466 "uuid": "8eae4776-ae00-5e90-89a8-eb1b5f961975", 00:13:07.466 "is_configured": true, 00:13:07.466 "data_offset": 2048, 00:13:07.466 "data_size": 63488 00:13:07.466 }, 00:13:07.466 { 00:13:07.466 "name": null, 00:13:07.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.467 "is_configured": false, 00:13:07.467 "data_offset": 2048, 00:13:07.467 "data_size": 63488 00:13:07.467 }, 00:13:07.467 { 00:13:07.467 "name": "BaseBdev3", 00:13:07.467 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:07.467 "is_configured": true, 00:13:07.467 "data_offset": 2048, 00:13:07.467 "data_size": 63488 00:13:07.467 }, 00:13:07.467 { 00:13:07.467 "name": "BaseBdev4", 00:13:07.467 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:07.467 "is_configured": true, 00:13:07.467 "data_offset": 2048, 00:13:07.467 "data_size": 63488 00:13:07.467 } 00:13:07.467 ] 00:13:07.467 }' 00:13:07.467 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.467 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.727 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:13:07.727 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.727 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:07.727 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.727 15:29:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.727 [2024-11-26 15:29:05.998493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.727 [2024-11-26 15:29:06.038843] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:07.727 [2024-11-26 15:29:06.038894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.727 [2024-11-26 15:29:06.038927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.727 [2024-11-26 15:29:06.038934] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.727 "name": "raid_bdev1", 00:13:07.727 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:07.727 "strip_size_kb": 0, 00:13:07.727 "state": "online", 00:13:07.727 "raid_level": "raid1", 00:13:07.727 "superblock": true, 00:13:07.727 "num_base_bdevs": 4, 00:13:07.727 "num_base_bdevs_discovered": 2, 00:13:07.727 "num_base_bdevs_operational": 2, 00:13:07.727 "base_bdevs_list": [ 00:13:07.727 { 00:13:07.727 "name": null, 00:13:07.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.727 "is_configured": false, 00:13:07.727 "data_offset": 0, 00:13:07.727 "data_size": 63488 00:13:07.727 }, 00:13:07.727 { 00:13:07.727 "name": null, 00:13:07.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.727 "is_configured": false, 00:13:07.727 "data_offset": 2048, 00:13:07.727 "data_size": 63488 00:13:07.727 }, 00:13:07.727 { 00:13:07.727 "name": "BaseBdev3", 00:13:07.727 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:07.727 "is_configured": true, 
00:13:07.727 "data_offset": 2048, 00:13:07.727 "data_size": 63488 00:13:07.727 }, 00:13:07.727 { 00:13:07.727 "name": "BaseBdev4", 00:13:07.727 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:07.727 "is_configured": true, 00:13:07.727 "data_offset": 2048, 00:13:07.727 "data_size": 63488 00:13:07.727 } 00:13:07.727 ] 00:13:07.727 }' 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.727 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.297 "name": "raid_bdev1", 00:13:08.297 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:08.297 "strip_size_kb": 0, 00:13:08.297 "state": "online", 00:13:08.297 "raid_level": "raid1", 00:13:08.297 
"superblock": true, 00:13:08.297 "num_base_bdevs": 4, 00:13:08.297 "num_base_bdevs_discovered": 2, 00:13:08.297 "num_base_bdevs_operational": 2, 00:13:08.297 "base_bdevs_list": [ 00:13:08.297 { 00:13:08.297 "name": null, 00:13:08.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.297 "is_configured": false, 00:13:08.297 "data_offset": 0, 00:13:08.297 "data_size": 63488 00:13:08.297 }, 00:13:08.297 { 00:13:08.297 "name": null, 00:13:08.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.297 "is_configured": false, 00:13:08.297 "data_offset": 2048, 00:13:08.297 "data_size": 63488 00:13:08.297 }, 00:13:08.297 { 00:13:08.297 "name": "BaseBdev3", 00:13:08.297 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:08.297 "is_configured": true, 00:13:08.297 "data_offset": 2048, 00:13:08.297 "data_size": 63488 00:13:08.297 }, 00:13:08.297 { 00:13:08.297 "name": "BaseBdev4", 00:13:08.297 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:08.297 "is_configured": true, 00:13:08.297 "data_offset": 2048, 00:13:08.297 "data_size": 63488 00:13:08.297 } 00:13:08.297 ] 00:13:08.297 }' 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.297 [2024-11-26 15:29:06.627609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:08.297 [2024-11-26 15:29:06.627701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.297 [2024-11-26 15:29:06.627757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:08.297 [2024-11-26 15:29:06.627767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.297 [2024-11-26 15:29:06.628159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.297 [2024-11-26 15:29:06.628177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.297 [2024-11-26 15:29:06.628255] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:08.297 [2024-11-26 15:29:06.628273] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:08.297 [2024-11-26 15:29:06.628283] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:08.297 [2024-11-26 15:29:06.628292] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:08.297 BaseBdev1 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.297 15:29:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.238 "name": "raid_bdev1", 00:13:09.238 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:09.238 "strip_size_kb": 0, 00:13:09.238 "state": "online", 00:13:09.238 "raid_level": "raid1", 00:13:09.238 "superblock": true, 00:13:09.238 
"num_base_bdevs": 4, 00:13:09.238 "num_base_bdevs_discovered": 2, 00:13:09.238 "num_base_bdevs_operational": 2, 00:13:09.238 "base_bdevs_list": [ 00:13:09.238 { 00:13:09.238 "name": null, 00:13:09.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.238 "is_configured": false, 00:13:09.238 "data_offset": 0, 00:13:09.238 "data_size": 63488 00:13:09.238 }, 00:13:09.238 { 00:13:09.238 "name": null, 00:13:09.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.238 "is_configured": false, 00:13:09.238 "data_offset": 2048, 00:13:09.238 "data_size": 63488 00:13:09.238 }, 00:13:09.238 { 00:13:09.238 "name": "BaseBdev3", 00:13:09.238 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:09.238 "is_configured": true, 00:13:09.238 "data_offset": 2048, 00:13:09.238 "data_size": 63488 00:13:09.238 }, 00:13:09.238 { 00:13:09.238 "name": "BaseBdev4", 00:13:09.238 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:09.238 "is_configured": true, 00:13:09.238 "data_offset": 2048, 00:13:09.238 "data_size": 63488 00:13:09.238 } 00:13:09.238 ] 00:13:09.238 }' 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.238 15:29:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.809 15:29:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.809 "name": "raid_bdev1", 00:13:09.809 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:09.809 "strip_size_kb": 0, 00:13:09.809 "state": "online", 00:13:09.809 "raid_level": "raid1", 00:13:09.809 "superblock": true, 00:13:09.809 "num_base_bdevs": 4, 00:13:09.809 "num_base_bdevs_discovered": 2, 00:13:09.809 "num_base_bdevs_operational": 2, 00:13:09.809 "base_bdevs_list": [ 00:13:09.809 { 00:13:09.809 "name": null, 00:13:09.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.809 "is_configured": false, 00:13:09.809 "data_offset": 0, 00:13:09.809 "data_size": 63488 00:13:09.809 }, 00:13:09.809 { 00:13:09.809 "name": null, 00:13:09.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.809 "is_configured": false, 00:13:09.809 "data_offset": 2048, 00:13:09.809 "data_size": 63488 00:13:09.809 }, 00:13:09.809 { 00:13:09.809 "name": "BaseBdev3", 00:13:09.809 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:09.809 "is_configured": true, 00:13:09.809 "data_offset": 2048, 00:13:09.809 "data_size": 63488 00:13:09.809 }, 00:13:09.809 { 00:13:09.809 "name": "BaseBdev4", 00:13:09.809 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:09.809 "is_configured": true, 00:13:09.809 "data_offset": 2048, 00:13:09.809 "data_size": 63488 00:13:09.809 } 00:13:09.809 ] 00:13:09.809 }' 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.809 15:29:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.809 [2024-11-26 15:29:08.228298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.809 [2024-11-26 15:29:08.228490] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:09.809 [2024-11-26 15:29:08.228545] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:13:09.809 request: 00:13:09.809 { 00:13:09.809 "base_bdev": "BaseBdev1", 00:13:09.809 "raid_bdev": "raid_bdev1", 00:13:09.809 "method": "bdev_raid_add_base_bdev", 00:13:09.809 "req_id": 1 00:13:09.809 } 00:13:09.809 Got JSON-RPC error response 00:13:09.809 response: 00:13:09.809 { 00:13:09.809 "code": -22, 00:13:09.809 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:09.809 } 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:09.809 15:29:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.192 15:29:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.192 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.192 "name": "raid_bdev1", 00:13:11.192 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:11.192 "strip_size_kb": 0, 00:13:11.192 "state": "online", 00:13:11.192 "raid_level": "raid1", 00:13:11.192 "superblock": true, 00:13:11.192 "num_base_bdevs": 4, 00:13:11.192 "num_base_bdevs_discovered": 2, 00:13:11.192 "num_base_bdevs_operational": 2, 00:13:11.192 "base_bdevs_list": [ 00:13:11.192 { 00:13:11.192 "name": null, 00:13:11.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.192 "is_configured": false, 00:13:11.192 "data_offset": 0, 00:13:11.192 "data_size": 63488 00:13:11.192 }, 00:13:11.192 { 00:13:11.192 "name": null, 00:13:11.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.192 "is_configured": false, 00:13:11.192 "data_offset": 2048, 00:13:11.193 "data_size": 63488 00:13:11.193 }, 00:13:11.193 { 00:13:11.193 "name": "BaseBdev3", 00:13:11.193 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:11.193 "is_configured": true, 00:13:11.193 "data_offset": 2048, 00:13:11.193 "data_size": 63488 00:13:11.193 }, 00:13:11.193 { 00:13:11.193 "name": "BaseBdev4", 00:13:11.193 "uuid": 
"378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:11.193 "is_configured": true, 00:13:11.193 "data_offset": 2048, 00:13:11.193 "data_size": 63488 00:13:11.193 } 00:13:11.193 ] 00:13:11.193 }' 00:13:11.193 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.193 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.453 "name": "raid_bdev1", 00:13:11.453 "uuid": "e90d4e8c-c8a3-4565-a1d4-8c2927ffaece", 00:13:11.453 "strip_size_kb": 0, 00:13:11.453 "state": "online", 00:13:11.453 "raid_level": "raid1", 00:13:11.453 "superblock": true, 00:13:11.453 "num_base_bdevs": 4, 00:13:11.453 "num_base_bdevs_discovered": 2, 00:13:11.453 "num_base_bdevs_operational": 2, 00:13:11.453 
"base_bdevs_list": [ 00:13:11.453 { 00:13:11.453 "name": null, 00:13:11.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.453 "is_configured": false, 00:13:11.453 "data_offset": 0, 00:13:11.453 "data_size": 63488 00:13:11.453 }, 00:13:11.453 { 00:13:11.453 "name": null, 00:13:11.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.453 "is_configured": false, 00:13:11.453 "data_offset": 2048, 00:13:11.453 "data_size": 63488 00:13:11.453 }, 00:13:11.453 { 00:13:11.453 "name": "BaseBdev3", 00:13:11.453 "uuid": "52d0567a-c26e-5fe4-925a-2c42ac46a512", 00:13:11.453 "is_configured": true, 00:13:11.453 "data_offset": 2048, 00:13:11.453 "data_size": 63488 00:13:11.453 }, 00:13:11.453 { 00:13:11.453 "name": "BaseBdev4", 00:13:11.453 "uuid": "378ad0a6-00ba-5681-9aa1-a2c80c6da919", 00:13:11.453 "is_configured": true, 00:13:11.453 "data_offset": 2048, 00:13:11.453 "data_size": 63488 00:13:11.453 } 00:13:11.453 ] 00:13:11.453 }' 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 91261 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 91261 ']' 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 91261 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91261 00:13:11.453 killing process with pid 91261 00:13:11.453 Received shutdown signal, test time was about 17.641142 seconds 00:13:11.453 00:13:11.453 Latency(us) 00:13:11.453 [2024-11-26T15:29:09.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.453 [2024-11-26T15:29:09.932Z] =================================================================================================================== 00:13:11.453 [2024-11-26T15:29:09.932Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91261' 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 91261 00:13:11.453 [2024-11-26 15:29:09.881645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.453 15:29:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 91261 00:13:11.453 [2024-11-26 15:29:09.881779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.453 [2024-11-26 15:29:09.881878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.453 [2024-11-26 15:29:09.881898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.714 [2024-11-26 15:29:09.928805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.714 15:29:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:11.714 00:13:11.714 real 0m19.645s 00:13:11.714 user 0m26.209s 00:13:11.714 sys 0m2.512s 00:13:11.714 15:29:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.714 15:29:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.714 ************************************ 00:13:11.714 END TEST raid_rebuild_test_sb_io 00:13:11.714 ************************************ 00:13:11.976 15:29:10 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:11.976 15:29:10 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:11.976 15:29:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:11.976 15:29:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.976 15:29:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.976 ************************************ 00:13:11.976 START TEST raid5f_state_function_test 00:13:11.976 ************************************ 00:13:11.976 15:29:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:13:11.976 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.977 15:29:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=91966 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:11.977 Process raid pid: 91966 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91966' 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 91966 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 91966 ']' 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.977 15:29:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.977 [2024-11-26 15:29:10.313854] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:13:11.977 [2024-11-26 15:29:10.314401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.238 [2024-11-26 15:29:10.451722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:13:12.238 [2024-11-26 15:29:10.487130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.238 [2024-11-26 15:29:10.513112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.238 [2024-11-26 15:29:10.556275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.238 [2024-11-26 15:29:10.556307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 [2024-11-26 15:29:11.146926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.808 [2024-11-26 15:29:11.146976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.808 [2024-11-26 15:29:11.146996] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.808 [2024-11-26 15:29:11.147004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.808 [2024-11-26 15:29:11.147016] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:12.808 [2024-11-26 15:29:11.147023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.808 
15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.808 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.808 "name": "Existed_Raid", 00:13:12.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.808 "strip_size_kb": 64, 00:13:12.808 
"state": "configuring", 00:13:12.808 "raid_level": "raid5f", 00:13:12.808 "superblock": false, 00:13:12.808 "num_base_bdevs": 3, 00:13:12.808 "num_base_bdevs_discovered": 0, 00:13:12.808 "num_base_bdevs_operational": 3, 00:13:12.808 "base_bdevs_list": [ 00:13:12.808 { 00:13:12.808 "name": "BaseBdev1", 00:13:12.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.808 "is_configured": false, 00:13:12.808 "data_offset": 0, 00:13:12.808 "data_size": 0 00:13:12.809 }, 00:13:12.809 { 00:13:12.809 "name": "BaseBdev2", 00:13:12.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.809 "is_configured": false, 00:13:12.809 "data_offset": 0, 00:13:12.809 "data_size": 0 00:13:12.809 }, 00:13:12.809 { 00:13:12.809 "name": "BaseBdev3", 00:13:12.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.809 "is_configured": false, 00:13:12.809 "data_offset": 0, 00:13:12.809 "data_size": 0 00:13:12.809 } 00:13:12.809 ] 00:13:12.809 }' 00:13:12.809 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.809 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.418 [2024-11-26 15:29:11.546955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.418 [2024-11-26 15:29:11.547033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create 
-z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.418 [2024-11-26 15:29:11.554965] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.418 [2024-11-26 15:29:11.555055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.418 [2024-11-26 15:29:11.555085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.418 [2024-11-26 15:29:11.555105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.418 [2024-11-26 15:29:11.555124] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.418 [2024-11-26 15:29:11.555145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.418 [2024-11-26 15:29:11.571734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.418 BaseBdev1 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 
-- # local bdev_name=BaseBdev1 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.418 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.418 [ 00:13:13.418 { 00:13:13.418 "name": "BaseBdev1", 00:13:13.418 "aliases": [ 00:13:13.418 "13d197dc-862c-4265-936c-4a8ca7918b65" 00:13:13.418 ], 00:13:13.418 "product_name": "Malloc disk", 00:13:13.418 "block_size": 512, 00:13:13.418 "num_blocks": 65536, 00:13:13.418 "uuid": "13d197dc-862c-4265-936c-4a8ca7918b65", 00:13:13.418 "assigned_rate_limits": { 00:13:13.418 "rw_ios_per_sec": 0, 00:13:13.418 "rw_mbytes_per_sec": 0, 00:13:13.418 "r_mbytes_per_sec": 0, 00:13:13.418 "w_mbytes_per_sec": 0 00:13:13.418 }, 00:13:13.418 "claimed": true, 00:13:13.418 "claim_type": "exclusive_write", 00:13:13.418 "zoned": false, 00:13:13.418 "supported_io_types": { 00:13:13.418 "read": true, 00:13:13.418 
"write": true, 00:13:13.418 "unmap": true, 00:13:13.418 "flush": true, 00:13:13.418 "reset": true, 00:13:13.418 "nvme_admin": false, 00:13:13.418 "nvme_io": false, 00:13:13.418 "nvme_io_md": false, 00:13:13.418 "write_zeroes": true, 00:13:13.418 "zcopy": true, 00:13:13.418 "get_zone_info": false, 00:13:13.418 "zone_management": false, 00:13:13.418 "zone_append": false, 00:13:13.418 "compare": false, 00:13:13.419 "compare_and_write": false, 00:13:13.419 "abort": true, 00:13:13.419 "seek_hole": false, 00:13:13.419 "seek_data": false, 00:13:13.419 "copy": true, 00:13:13.419 "nvme_iov_md": false 00:13:13.419 }, 00:13:13.419 "memory_domains": [ 00:13:13.419 { 00:13:13.419 "dma_device_id": "system", 00:13:13.419 "dma_device_type": 1 00:13:13.419 }, 00:13:13.419 { 00:13:13.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.419 "dma_device_type": 2 00:13:13.419 } 00:13:13.419 ], 00:13:13.419 "driver_specific": {} 00:13:13.419 } 00:13:13.419 ] 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.419 "name": "Existed_Raid", 00:13:13.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.419 "strip_size_kb": 64, 00:13:13.419 "state": "configuring", 00:13:13.419 "raid_level": "raid5f", 00:13:13.419 "superblock": false, 00:13:13.419 "num_base_bdevs": 3, 00:13:13.419 "num_base_bdevs_discovered": 1, 00:13:13.419 "num_base_bdevs_operational": 3, 00:13:13.419 "base_bdevs_list": [ 00:13:13.419 { 00:13:13.419 "name": "BaseBdev1", 00:13:13.419 "uuid": "13d197dc-862c-4265-936c-4a8ca7918b65", 00:13:13.419 "is_configured": true, 00:13:13.419 "data_offset": 0, 00:13:13.419 "data_size": 65536 00:13:13.419 }, 00:13:13.419 { 00:13:13.419 "name": "BaseBdev2", 00:13:13.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.419 "is_configured": false, 00:13:13.419 "data_offset": 0, 00:13:13.419 "data_size": 0 00:13:13.419 }, 00:13:13.419 { 00:13:13.419 "name": "BaseBdev3", 00:13:13.419 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:13.419 "is_configured": false, 00:13:13.419 "data_offset": 0, 00:13:13.419 "data_size": 0 00:13:13.419 } 00:13:13.419 ] 00:13:13.419 }' 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.419 15:29:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.680 [2024-11-26 15:29:12.043884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.680 [2024-11-26 15:29:12.043941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.680 [2024-11-26 15:29:12.055924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.680 [2024-11-26 15:29:12.057719] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.680 [2024-11-26 15:29:12.057759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.680 [2024-11-26 15:29:12.057772] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:13:13.680 [2024-11-26 15:29:12.057779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:13.680 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.681 15:29:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.681 "name": "Existed_Raid", 00:13:13.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.681 "strip_size_kb": 64, 00:13:13.681 "state": "configuring", 00:13:13.681 "raid_level": "raid5f", 00:13:13.681 "superblock": false, 00:13:13.681 "num_base_bdevs": 3, 00:13:13.681 "num_base_bdevs_discovered": 1, 00:13:13.681 "num_base_bdevs_operational": 3, 00:13:13.681 "base_bdevs_list": [ 00:13:13.681 { 00:13:13.681 "name": "BaseBdev1", 00:13:13.681 "uuid": "13d197dc-862c-4265-936c-4a8ca7918b65", 00:13:13.681 "is_configured": true, 00:13:13.681 "data_offset": 0, 00:13:13.681 "data_size": 65536 00:13:13.681 }, 00:13:13.681 { 00:13:13.681 "name": "BaseBdev2", 00:13:13.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.681 "is_configured": false, 00:13:13.681 "data_offset": 0, 00:13:13.681 "data_size": 0 00:13:13.681 }, 00:13:13.681 { 00:13:13.681 "name": "BaseBdev3", 00:13:13.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.681 "is_configured": false, 00:13:13.681 "data_offset": 0, 00:13:13.681 "data_size": 0 00:13:13.681 } 00:13:13.681 ] 00:13:13.681 }' 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.681 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.252 
[2024-11-26 15:29:12.487106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.252 BaseBdev2 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.252 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.252 [ 00:13:14.252 { 00:13:14.252 "name": "BaseBdev2", 00:13:14.252 "aliases": [ 00:13:14.252 "f2134616-db50-4adc-aafc-dd1b65dd9b6f" 00:13:14.252 ], 00:13:14.252 "product_name": "Malloc disk", 00:13:14.252 "block_size": 512, 00:13:14.252 "num_blocks": 
65536, 00:13:14.252 "uuid": "f2134616-db50-4adc-aafc-dd1b65dd9b6f", 00:13:14.252 "assigned_rate_limits": { 00:13:14.252 "rw_ios_per_sec": 0, 00:13:14.253 "rw_mbytes_per_sec": 0, 00:13:14.253 "r_mbytes_per_sec": 0, 00:13:14.253 "w_mbytes_per_sec": 0 00:13:14.253 }, 00:13:14.253 "claimed": true, 00:13:14.253 "claim_type": "exclusive_write", 00:13:14.253 "zoned": false, 00:13:14.253 "supported_io_types": { 00:13:14.253 "read": true, 00:13:14.253 "write": true, 00:13:14.253 "unmap": true, 00:13:14.253 "flush": true, 00:13:14.253 "reset": true, 00:13:14.253 "nvme_admin": false, 00:13:14.253 "nvme_io": false, 00:13:14.253 "nvme_io_md": false, 00:13:14.253 "write_zeroes": true, 00:13:14.253 "zcopy": true, 00:13:14.253 "get_zone_info": false, 00:13:14.253 "zone_management": false, 00:13:14.253 "zone_append": false, 00:13:14.253 "compare": false, 00:13:14.253 "compare_and_write": false, 00:13:14.253 "abort": true, 00:13:14.253 "seek_hole": false, 00:13:14.253 "seek_data": false, 00:13:14.253 "copy": true, 00:13:14.253 "nvme_iov_md": false 00:13:14.253 }, 00:13:14.253 "memory_domains": [ 00:13:14.253 { 00:13:14.253 "dma_device_id": "system", 00:13:14.253 "dma_device_type": 1 00:13:14.253 }, 00:13:14.253 { 00:13:14.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.253 "dma_device_type": 2 00:13:14.253 } 00:13:14.253 ], 00:13:14.253 "driver_specific": {} 00:13:14.253 } 00:13:14.253 ] 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:14.253 15:29:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.253 "name": "Existed_Raid", 00:13:14.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.253 "strip_size_kb": 64, 00:13:14.253 "state": "configuring", 00:13:14.253 "raid_level": "raid5f", 00:13:14.253 "superblock": false, 00:13:14.253 "num_base_bdevs": 3, 00:13:14.253 
"num_base_bdevs_discovered": 2, 00:13:14.253 "num_base_bdevs_operational": 3, 00:13:14.253 "base_bdevs_list": [ 00:13:14.253 { 00:13:14.253 "name": "BaseBdev1", 00:13:14.253 "uuid": "13d197dc-862c-4265-936c-4a8ca7918b65", 00:13:14.253 "is_configured": true, 00:13:14.253 "data_offset": 0, 00:13:14.253 "data_size": 65536 00:13:14.253 }, 00:13:14.253 { 00:13:14.253 "name": "BaseBdev2", 00:13:14.253 "uuid": "f2134616-db50-4adc-aafc-dd1b65dd9b6f", 00:13:14.253 "is_configured": true, 00:13:14.253 "data_offset": 0, 00:13:14.253 "data_size": 65536 00:13:14.253 }, 00:13:14.253 { 00:13:14.253 "name": "BaseBdev3", 00:13:14.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.253 "is_configured": false, 00:13:14.253 "data_offset": 0, 00:13:14.253 "data_size": 0 00:13:14.253 } 00:13:14.253 ] 00:13:14.253 }' 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.253 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 [2024-11-26 15:29:12.913037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.513 [2024-11-26 15:29:12.913424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:14.513 [2024-11-26 15:29:12.913507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:14.513 [2024-11-26 15:29:12.914720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:14.513 [2024-11-26 15:29:12.916521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:13:14.513 [2024-11-26 15:29:12.916734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:13:14.513 [2024-11-26 15:29:12.917564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.513 BaseBdev3 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.513 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 [ 00:13:14.513 { 00:13:14.513 "name": "BaseBdev3", 00:13:14.513 "aliases": 
[ 00:13:14.513 "568afaa1-e2e7-4f2d-a844-54d148bec7d9" 00:13:14.513 ], 00:13:14.514 "product_name": "Malloc disk", 00:13:14.514 "block_size": 512, 00:13:14.514 "num_blocks": 65536, 00:13:14.514 "uuid": "568afaa1-e2e7-4f2d-a844-54d148bec7d9", 00:13:14.514 "assigned_rate_limits": { 00:13:14.514 "rw_ios_per_sec": 0, 00:13:14.514 "rw_mbytes_per_sec": 0, 00:13:14.514 "r_mbytes_per_sec": 0, 00:13:14.514 "w_mbytes_per_sec": 0 00:13:14.514 }, 00:13:14.514 "claimed": true, 00:13:14.514 "claim_type": "exclusive_write", 00:13:14.514 "zoned": false, 00:13:14.514 "supported_io_types": { 00:13:14.514 "read": true, 00:13:14.514 "write": true, 00:13:14.514 "unmap": true, 00:13:14.514 "flush": true, 00:13:14.514 "reset": true, 00:13:14.514 "nvme_admin": false, 00:13:14.514 "nvme_io": false, 00:13:14.514 "nvme_io_md": false, 00:13:14.514 "write_zeroes": true, 00:13:14.514 "zcopy": true, 00:13:14.514 "get_zone_info": false, 00:13:14.514 "zone_management": false, 00:13:14.514 "zone_append": false, 00:13:14.514 "compare": false, 00:13:14.514 "compare_and_write": false, 00:13:14.514 "abort": true, 00:13:14.514 "seek_hole": false, 00:13:14.514 "seek_data": false, 00:13:14.514 "copy": true, 00:13:14.514 "nvme_iov_md": false 00:13:14.514 }, 00:13:14.514 "memory_domains": [ 00:13:14.514 { 00:13:14.514 "dma_device_id": "system", 00:13:14.514 "dma_device_type": 1 00:13:14.514 }, 00:13:14.514 { 00:13:14.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.514 "dma_device_type": 2 00:13:14.514 } 00:13:14.514 ], 00:13:14.514 "driver_specific": {} 00:13:14.514 } 00:13:14.514 ] 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs 
)) 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.514 15:29:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.774 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.774 "name": "Existed_Raid", 00:13:14.774 "uuid": "d1e82ae6-a61c-4126-850e-95a9d9aaec3f", 00:13:14.774 "strip_size_kb": 64, 
00:13:14.774 "state": "online", 00:13:14.774 "raid_level": "raid5f", 00:13:14.774 "superblock": false, 00:13:14.774 "num_base_bdevs": 3, 00:13:14.774 "num_base_bdevs_discovered": 3, 00:13:14.774 "num_base_bdevs_operational": 3, 00:13:14.774 "base_bdevs_list": [ 00:13:14.774 { 00:13:14.774 "name": "BaseBdev1", 00:13:14.774 "uuid": "13d197dc-862c-4265-936c-4a8ca7918b65", 00:13:14.774 "is_configured": true, 00:13:14.774 "data_offset": 0, 00:13:14.774 "data_size": 65536 00:13:14.774 }, 00:13:14.774 { 00:13:14.774 "name": "BaseBdev2", 00:13:14.774 "uuid": "f2134616-db50-4adc-aafc-dd1b65dd9b6f", 00:13:14.774 "is_configured": true, 00:13:14.774 "data_offset": 0, 00:13:14.774 "data_size": 65536 00:13:14.774 }, 00:13:14.774 { 00:13:14.774 "name": "BaseBdev3", 00:13:14.774 "uuid": "568afaa1-e2e7-4f2d-a844-54d148bec7d9", 00:13:14.774 "is_configured": true, 00:13:14.774 "data_offset": 0, 00:13:14.774 "data_size": 65536 00:13:14.774 } 00:13:14.774 ] 00:13:14.774 }' 00:13:14.774 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.774 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:15.033 15:29:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.033 [2024-11-26 15:29:13.429596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.033 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:15.033 "name": "Existed_Raid", 00:13:15.033 "aliases": [ 00:13:15.033 "d1e82ae6-a61c-4126-850e-95a9d9aaec3f" 00:13:15.033 ], 00:13:15.033 "product_name": "Raid Volume", 00:13:15.033 "block_size": 512, 00:13:15.033 "num_blocks": 131072, 00:13:15.033 "uuid": "d1e82ae6-a61c-4126-850e-95a9d9aaec3f", 00:13:15.033 "assigned_rate_limits": { 00:13:15.033 "rw_ios_per_sec": 0, 00:13:15.033 "rw_mbytes_per_sec": 0, 00:13:15.033 "r_mbytes_per_sec": 0, 00:13:15.033 "w_mbytes_per_sec": 0 00:13:15.033 }, 00:13:15.033 "claimed": false, 00:13:15.033 "zoned": false, 00:13:15.033 "supported_io_types": { 00:13:15.033 "read": true, 00:13:15.033 "write": true, 00:13:15.033 "unmap": false, 00:13:15.033 "flush": false, 00:13:15.033 "reset": true, 00:13:15.033 "nvme_admin": false, 00:13:15.033 "nvme_io": false, 00:13:15.033 "nvme_io_md": false, 00:13:15.033 "write_zeroes": true, 00:13:15.033 "zcopy": false, 00:13:15.033 "get_zone_info": false, 00:13:15.033 "zone_management": false, 00:13:15.033 "zone_append": false, 00:13:15.033 "compare": false, 00:13:15.033 "compare_and_write": false, 00:13:15.033 "abort": false, 00:13:15.033 "seek_hole": false, 00:13:15.034 "seek_data": false, 00:13:15.034 "copy": false, 00:13:15.034 "nvme_iov_md": false 00:13:15.034 }, 00:13:15.034 "driver_specific": { 00:13:15.034 "raid": { 00:13:15.034 "uuid": 
"d1e82ae6-a61c-4126-850e-95a9d9aaec3f", 00:13:15.034 "strip_size_kb": 64, 00:13:15.034 "state": "online", 00:13:15.034 "raid_level": "raid5f", 00:13:15.034 "superblock": false, 00:13:15.034 "num_base_bdevs": 3, 00:13:15.034 "num_base_bdevs_discovered": 3, 00:13:15.034 "num_base_bdevs_operational": 3, 00:13:15.034 "base_bdevs_list": [ 00:13:15.034 { 00:13:15.034 "name": "BaseBdev1", 00:13:15.034 "uuid": "13d197dc-862c-4265-936c-4a8ca7918b65", 00:13:15.034 "is_configured": true, 00:13:15.034 "data_offset": 0, 00:13:15.034 "data_size": 65536 00:13:15.034 }, 00:13:15.034 { 00:13:15.034 "name": "BaseBdev2", 00:13:15.034 "uuid": "f2134616-db50-4adc-aafc-dd1b65dd9b6f", 00:13:15.034 "is_configured": true, 00:13:15.034 "data_offset": 0, 00:13:15.034 "data_size": 65536 00:13:15.034 }, 00:13:15.034 { 00:13:15.034 "name": "BaseBdev3", 00:13:15.034 "uuid": "568afaa1-e2e7-4f2d-a844-54d148bec7d9", 00:13:15.034 "is_configured": true, 00:13:15.034 "data_offset": 0, 00:13:15.034 "data_size": 65536 00:13:15.034 } 00:13:15.034 ] 00:13:15.034 } 00:13:15.034 } 00:13:15.034 }' 00:13:15.034 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:15.034 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:15.034 BaseBdev2 00:13:15.034 BaseBdev3' 00:13:15.034 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.293 [2024-11-26 15:29:13.709535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.293 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.553 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.553 "name": "Existed_Raid", 00:13:15.553 "uuid": "d1e82ae6-a61c-4126-850e-95a9d9aaec3f", 00:13:15.553 "strip_size_kb": 64, 00:13:15.553 "state": "online", 00:13:15.553 "raid_level": "raid5f", 00:13:15.553 "superblock": false, 00:13:15.553 "num_base_bdevs": 3, 00:13:15.553 "num_base_bdevs_discovered": 2, 00:13:15.553 "num_base_bdevs_operational": 2, 00:13:15.553 "base_bdevs_list": [ 00:13:15.553 { 00:13:15.553 "name": null, 00:13:15.553 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:15.553 "is_configured": false, 00:13:15.553 "data_offset": 0, 00:13:15.553 "data_size": 65536 00:13:15.553 }, 00:13:15.553 { 00:13:15.553 "name": "BaseBdev2", 00:13:15.553 "uuid": "f2134616-db50-4adc-aafc-dd1b65dd9b6f", 00:13:15.554 "is_configured": true, 00:13:15.554 "data_offset": 0, 00:13:15.554 "data_size": 65536 00:13:15.554 }, 00:13:15.554 { 00:13:15.554 "name": "BaseBdev3", 00:13:15.554 "uuid": "568afaa1-e2e7-4f2d-a844-54d148bec7d9", 00:13:15.554 "is_configured": true, 00:13:15.554 "data_offset": 0, 00:13:15.554 "data_size": 65536 00:13:15.554 } 00:13:15.554 ] 00:13:15.554 }' 00:13:15.554 15:29:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.554 15:29:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:15.814 
15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.814 [2024-11-26 15:29:14.156912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:15.814 [2024-11-26 15:29:14.157000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.814 [2024-11-26 15:29:14.168210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.814 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.815 [2024-11-26 15:29:14.224288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.815 [2024-11-26 15:29:14.224379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:15.815 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:16.075 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:16.075 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.076 15:29:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.076 BaseBdev2 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.076 [ 00:13:16.076 { 00:13:16.076 "name": "BaseBdev2", 00:13:16.076 "aliases": [ 00:13:16.076 "c25b1fe2-6677-4839-9d4b-5824ae2b83ae" 00:13:16.076 ], 00:13:16.076 "product_name": "Malloc disk", 00:13:16.076 "block_size": 512, 00:13:16.076 "num_blocks": 65536, 00:13:16.076 "uuid": 
"c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:16.076 "assigned_rate_limits": { 00:13:16.076 "rw_ios_per_sec": 0, 00:13:16.076 "rw_mbytes_per_sec": 0, 00:13:16.076 "r_mbytes_per_sec": 0, 00:13:16.076 "w_mbytes_per_sec": 0 00:13:16.076 }, 00:13:16.076 "claimed": false, 00:13:16.076 "zoned": false, 00:13:16.076 "supported_io_types": { 00:13:16.076 "read": true, 00:13:16.076 "write": true, 00:13:16.076 "unmap": true, 00:13:16.076 "flush": true, 00:13:16.076 "reset": true, 00:13:16.076 "nvme_admin": false, 00:13:16.076 "nvme_io": false, 00:13:16.076 "nvme_io_md": false, 00:13:16.076 "write_zeroes": true, 00:13:16.076 "zcopy": true, 00:13:16.076 "get_zone_info": false, 00:13:16.076 "zone_management": false, 00:13:16.076 "zone_append": false, 00:13:16.076 "compare": false, 00:13:16.076 "compare_and_write": false, 00:13:16.076 "abort": true, 00:13:16.076 "seek_hole": false, 00:13:16.076 "seek_data": false, 00:13:16.076 "copy": true, 00:13:16.076 "nvme_iov_md": false 00:13:16.076 }, 00:13:16.076 "memory_domains": [ 00:13:16.076 { 00:13:16.076 "dma_device_id": "system", 00:13:16.076 "dma_device_type": 1 00:13:16.076 }, 00:13:16.076 { 00:13:16.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.076 "dma_device_type": 2 00:13:16.076 } 00:13:16.076 ], 00:13:16.076 "driver_specific": {} 00:13:16.076 } 00:13:16.076 ] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.076 BaseBdev3 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.076 [ 00:13:16.076 { 00:13:16.076 "name": "BaseBdev3", 00:13:16.076 "aliases": [ 00:13:16.076 "40ae4063-dcb1-4e9d-94f4-c05a55343660" 00:13:16.076 ], 00:13:16.076 "product_name": "Malloc disk", 00:13:16.076 "block_size": 512, 00:13:16.076 "num_blocks": 
65536, 00:13:16.076 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:16.076 "assigned_rate_limits": { 00:13:16.076 "rw_ios_per_sec": 0, 00:13:16.076 "rw_mbytes_per_sec": 0, 00:13:16.076 "r_mbytes_per_sec": 0, 00:13:16.076 "w_mbytes_per_sec": 0 00:13:16.076 }, 00:13:16.076 "claimed": false, 00:13:16.076 "zoned": false, 00:13:16.076 "supported_io_types": { 00:13:16.076 "read": true, 00:13:16.076 "write": true, 00:13:16.076 "unmap": true, 00:13:16.076 "flush": true, 00:13:16.076 "reset": true, 00:13:16.076 "nvme_admin": false, 00:13:16.076 "nvme_io": false, 00:13:16.076 "nvme_io_md": false, 00:13:16.076 "write_zeroes": true, 00:13:16.076 "zcopy": true, 00:13:16.076 "get_zone_info": false, 00:13:16.076 "zone_management": false, 00:13:16.076 "zone_append": false, 00:13:16.076 "compare": false, 00:13:16.076 "compare_and_write": false, 00:13:16.076 "abort": true, 00:13:16.076 "seek_hole": false, 00:13:16.076 "seek_data": false, 00:13:16.076 "copy": true, 00:13:16.076 "nvme_iov_md": false 00:13:16.076 }, 00:13:16.076 "memory_domains": [ 00:13:16.076 { 00:13:16.076 "dma_device_id": "system", 00:13:16.076 "dma_device_type": 1 00:13:16.076 }, 00:13:16.076 { 00:13:16.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.076 "dma_device_type": 2 00:13:16.076 } 00:13:16.076 ], 00:13:16.076 "driver_specific": {} 00:13:16.076 } 00:13:16.076 ] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:16.076 15:29:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.076 [2024-11-26 15:29:14.399242] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.076 [2024-11-26 15:29:14.399328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.076 [2024-11-26 15:29:14.399387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.076 [2024-11-26 15:29:14.401166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.076 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.077 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.077 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.077 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.077 "name": "Existed_Raid", 00:13:16.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.077 "strip_size_kb": 64, 00:13:16.077 "state": "configuring", 00:13:16.077 "raid_level": "raid5f", 00:13:16.077 "superblock": false, 00:13:16.077 "num_base_bdevs": 3, 00:13:16.077 "num_base_bdevs_discovered": 2, 00:13:16.077 "num_base_bdevs_operational": 3, 00:13:16.077 "base_bdevs_list": [ 00:13:16.077 { 00:13:16.077 "name": "BaseBdev1", 00:13:16.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.077 "is_configured": false, 00:13:16.077 "data_offset": 0, 00:13:16.077 "data_size": 0 00:13:16.077 }, 00:13:16.077 { 00:13:16.077 "name": "BaseBdev2", 00:13:16.077 "uuid": "c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:16.077 "is_configured": true, 00:13:16.077 "data_offset": 0, 00:13:16.077 "data_size": 65536 00:13:16.077 }, 00:13:16.077 { 00:13:16.077 "name": "BaseBdev3", 00:13:16.077 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:16.077 "is_configured": true, 00:13:16.077 "data_offset": 0, 00:13:16.077 "data_size": 65536 00:13:16.077 } 00:13:16.077 ] 00:13:16.077 }' 00:13:16.077 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.077 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.647 [2024-11-26 15:29:14.819383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.647 "name": "Existed_Raid", 00:13:16.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.647 "strip_size_kb": 64, 00:13:16.647 "state": "configuring", 00:13:16.647 "raid_level": "raid5f", 00:13:16.647 "superblock": false, 00:13:16.647 "num_base_bdevs": 3, 00:13:16.647 "num_base_bdevs_discovered": 1, 00:13:16.647 "num_base_bdevs_operational": 3, 00:13:16.647 "base_bdevs_list": [ 00:13:16.647 { 00:13:16.647 "name": "BaseBdev1", 00:13:16.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.647 "is_configured": false, 00:13:16.647 "data_offset": 0, 00:13:16.647 "data_size": 0 00:13:16.647 }, 00:13:16.647 { 00:13:16.647 "name": null, 00:13:16.647 "uuid": "c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:16.647 "is_configured": false, 00:13:16.647 "data_offset": 0, 00:13:16.647 "data_size": 65536 00:13:16.647 }, 00:13:16.647 { 00:13:16.647 "name": "BaseBdev3", 00:13:16.647 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:16.647 "is_configured": true, 00:13:16.647 "data_offset": 0, 00:13:16.647 "data_size": 65536 00:13:16.647 } 00:13:16.647 ] 00:13:16.647 }' 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.647 15:29:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.908 [2024-11-26 15:29:15.266320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.908 BaseBdev1 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.908 15:29:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.908 [ 00:13:16.908 { 00:13:16.908 "name": "BaseBdev1", 00:13:16.908 "aliases": [ 00:13:16.908 "c1a138cb-af3e-48c9-a0ab-bbc8344353f8" 00:13:16.908 ], 00:13:16.908 "product_name": "Malloc disk", 00:13:16.908 "block_size": 512, 00:13:16.908 "num_blocks": 65536, 00:13:16.908 "uuid": "c1a138cb-af3e-48c9-a0ab-bbc8344353f8", 00:13:16.908 "assigned_rate_limits": { 00:13:16.908 "rw_ios_per_sec": 0, 00:13:16.908 "rw_mbytes_per_sec": 0, 00:13:16.908 "r_mbytes_per_sec": 0, 00:13:16.908 "w_mbytes_per_sec": 0 00:13:16.908 }, 00:13:16.908 "claimed": true, 00:13:16.908 "claim_type": "exclusive_write", 00:13:16.908 "zoned": false, 00:13:16.908 "supported_io_types": { 00:13:16.908 "read": true, 00:13:16.908 "write": true, 00:13:16.908 "unmap": true, 00:13:16.908 "flush": true, 00:13:16.908 "reset": true, 00:13:16.908 "nvme_admin": false, 00:13:16.908 "nvme_io": false, 00:13:16.908 "nvme_io_md": false, 00:13:16.908 "write_zeroes": true, 00:13:16.908 "zcopy": true, 00:13:16.908 "get_zone_info": false, 00:13:16.908 "zone_management": false, 00:13:16.908 "zone_append": false, 00:13:16.908 "compare": false, 00:13:16.908 "compare_and_write": false, 00:13:16.908 "abort": true, 00:13:16.908 "seek_hole": false, 00:13:16.908 "seek_data": false, 00:13:16.908 "copy": true, 00:13:16.908 "nvme_iov_md": false 00:13:16.908 }, 00:13:16.908 "memory_domains": [ 00:13:16.908 { 00:13:16.908 "dma_device_id": "system", 00:13:16.908 "dma_device_type": 1 
00:13:16.908 }, 00:13:16.908 { 00:13:16.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.908 "dma_device_type": 2 00:13:16.908 } 00:13:16.908 ], 00:13:16.908 "driver_specific": {} 00:13:16.908 } 00:13:16.908 ] 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.908 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.909 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.909 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.909 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.909 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.909 
15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.909 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.909 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.909 "name": "Existed_Raid", 00:13:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.909 "strip_size_kb": 64, 00:13:16.909 "state": "configuring", 00:13:16.909 "raid_level": "raid5f", 00:13:16.909 "superblock": false, 00:13:16.909 "num_base_bdevs": 3, 00:13:16.909 "num_base_bdevs_discovered": 2, 00:13:16.909 "num_base_bdevs_operational": 3, 00:13:16.909 "base_bdevs_list": [ 00:13:16.909 { 00:13:16.909 "name": "BaseBdev1", 00:13:16.909 "uuid": "c1a138cb-af3e-48c9-a0ab-bbc8344353f8", 00:13:16.909 "is_configured": true, 00:13:16.909 "data_offset": 0, 00:13:16.909 "data_size": 65536 00:13:16.909 }, 00:13:16.909 { 00:13:16.909 "name": null, 00:13:16.909 "uuid": "c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:16.909 "is_configured": false, 00:13:16.909 "data_offset": 0, 00:13:16.909 "data_size": 65536 00:13:16.909 }, 00:13:16.909 { 00:13:16.909 "name": "BaseBdev3", 00:13:16.909 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:16.909 "is_configured": true, 00:13:16.909 "data_offset": 0, 00:13:16.909 "data_size": 65536 00:13:16.909 } 00:13:16.909 ] 00:13:16.909 }' 00:13:16.909 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.909 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.480 15:29:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.480 [2024-11-26 15:29:15.702481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.480 15:29:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.480 "name": "Existed_Raid", 00:13:17.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.480 "strip_size_kb": 64, 00:13:17.480 "state": "configuring", 00:13:17.480 "raid_level": "raid5f", 00:13:17.480 "superblock": false, 00:13:17.480 "num_base_bdevs": 3, 00:13:17.480 "num_base_bdevs_discovered": 1, 00:13:17.480 "num_base_bdevs_operational": 3, 00:13:17.480 "base_bdevs_list": [ 00:13:17.480 { 00:13:17.480 "name": "BaseBdev1", 00:13:17.480 "uuid": "c1a138cb-af3e-48c9-a0ab-bbc8344353f8", 00:13:17.480 "is_configured": true, 00:13:17.480 "data_offset": 0, 00:13:17.480 "data_size": 65536 00:13:17.480 }, 00:13:17.480 { 00:13:17.480 "name": null, 00:13:17.480 "uuid": "c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:17.480 "is_configured": false, 00:13:17.480 "data_offset": 0, 00:13:17.480 "data_size": 65536 00:13:17.480 }, 00:13:17.480 { 00:13:17.480 "name": null, 00:13:17.480 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:17.480 "is_configured": false, 00:13:17.480 "data_offset": 0, 00:13:17.480 "data_size": 65536 00:13:17.480 } 00:13:17.480 ] 00:13:17.480 }' 00:13:17.480 15:29:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.480 15:29:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.741 [2024-11-26 15:29:16.186644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.741 
15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.741 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.002 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.002 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.002 "name": "Existed_Raid", 00:13:18.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.002 "strip_size_kb": 64, 00:13:18.002 "state": "configuring", 00:13:18.002 "raid_level": "raid5f", 00:13:18.002 "superblock": false, 00:13:18.002 "num_base_bdevs": 3, 00:13:18.002 "num_base_bdevs_discovered": 2, 00:13:18.002 "num_base_bdevs_operational": 3, 00:13:18.002 "base_bdevs_list": [ 00:13:18.002 { 00:13:18.002 "name": "BaseBdev1", 00:13:18.002 "uuid": "c1a138cb-af3e-48c9-a0ab-bbc8344353f8", 00:13:18.002 "is_configured": true, 00:13:18.002 "data_offset": 0, 00:13:18.002 "data_size": 65536 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "name": null, 00:13:18.002 "uuid": "c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:18.002 "is_configured": 
false, 00:13:18.002 "data_offset": 0, 00:13:18.002 "data_size": 65536 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "name": "BaseBdev3", 00:13:18.002 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:18.002 "is_configured": true, 00:13:18.002 "data_offset": 0, 00:13:18.002 "data_size": 65536 00:13:18.002 } 00:13:18.002 ] 00:13:18.002 }' 00:13:18.002 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.002 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.262 [2024-11-26 15:29:16.666785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:18.262 15:29:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.262 "name": "Existed_Raid", 00:13:18.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.262 "strip_size_kb": 64, 00:13:18.262 "state": "configuring", 00:13:18.262 "raid_level": "raid5f", 00:13:18.262 "superblock": false, 00:13:18.262 "num_base_bdevs": 3, 00:13:18.262 
"num_base_bdevs_discovered": 1, 00:13:18.262 "num_base_bdevs_operational": 3, 00:13:18.262 "base_bdevs_list": [ 00:13:18.262 { 00:13:18.262 "name": null, 00:13:18.262 "uuid": "c1a138cb-af3e-48c9-a0ab-bbc8344353f8", 00:13:18.262 "is_configured": false, 00:13:18.262 "data_offset": 0, 00:13:18.262 "data_size": 65536 00:13:18.262 }, 00:13:18.262 { 00:13:18.262 "name": null, 00:13:18.262 "uuid": "c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:18.262 "is_configured": false, 00:13:18.262 "data_offset": 0, 00:13:18.262 "data_size": 65536 00:13:18.262 }, 00:13:18.262 { 00:13:18.262 "name": "BaseBdev3", 00:13:18.262 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:18.262 "is_configured": true, 00:13:18.262 "data_offset": 0, 00:13:18.262 "data_size": 65536 00:13:18.262 } 00:13:18.262 ] 00:13:18.262 }' 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.262 15:29:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.832 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.832 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:18.832 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.832 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.832 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.833 15:29:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.833 [2024-11-26 15:29:17.181420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.833 15:29:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.833 "name": "Existed_Raid", 00:13:18.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.833 "strip_size_kb": 64, 00:13:18.833 "state": "configuring", 00:13:18.833 "raid_level": "raid5f", 00:13:18.833 "superblock": false, 00:13:18.833 "num_base_bdevs": 3, 00:13:18.833 "num_base_bdevs_discovered": 2, 00:13:18.833 "num_base_bdevs_operational": 3, 00:13:18.833 "base_bdevs_list": [ 00:13:18.833 { 00:13:18.833 "name": null, 00:13:18.833 "uuid": "c1a138cb-af3e-48c9-a0ab-bbc8344353f8", 00:13:18.833 "is_configured": false, 00:13:18.833 "data_offset": 0, 00:13:18.833 "data_size": 65536 00:13:18.833 }, 00:13:18.833 { 00:13:18.833 "name": "BaseBdev2", 00:13:18.833 "uuid": "c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:18.833 "is_configured": true, 00:13:18.833 "data_offset": 0, 00:13:18.833 "data_size": 65536 00:13:18.833 }, 00:13:18.833 { 00:13:18.833 "name": "BaseBdev3", 00:13:18.833 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:18.833 "is_configured": true, 00:13:18.833 "data_offset": 0, 00:13:18.833 "data_size": 65536 00:13:18.833 } 00:13:18.833 ] 00:13:18.833 }' 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.833 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.404 15:29:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c1a138cb-af3e-48c9-a0ab-bbc8344353f8 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.404 [2024-11-26 15:29:17.708633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:19.404 [2024-11-26 15:29:17.708697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:19.404 [2024-11-26 15:29:17.708705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:19.404 [2024-11-26 15:29:17.708951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:19.404 [2024-11-26 15:29:17.709390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:19.404 [2024-11-26 15:29:17.709407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:19.404 [2024-11-26 
15:29:17.709577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.404 NewBaseBdev 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.404 [ 00:13:19.404 { 00:13:19.404 "name": "NewBaseBdev", 00:13:19.404 "aliases": [ 00:13:19.404 "c1a138cb-af3e-48c9-a0ab-bbc8344353f8" 00:13:19.404 ], 00:13:19.404 "product_name": "Malloc disk", 00:13:19.404 "block_size": 512, 00:13:19.404 "num_blocks": 65536, 00:13:19.404 
"uuid": "c1a138cb-af3e-48c9-a0ab-bbc8344353f8", 00:13:19.404 "assigned_rate_limits": { 00:13:19.404 "rw_ios_per_sec": 0, 00:13:19.404 "rw_mbytes_per_sec": 0, 00:13:19.404 "r_mbytes_per_sec": 0, 00:13:19.404 "w_mbytes_per_sec": 0 00:13:19.404 }, 00:13:19.404 "claimed": true, 00:13:19.404 "claim_type": "exclusive_write", 00:13:19.404 "zoned": false, 00:13:19.404 "supported_io_types": { 00:13:19.404 "read": true, 00:13:19.404 "write": true, 00:13:19.404 "unmap": true, 00:13:19.404 "flush": true, 00:13:19.404 "reset": true, 00:13:19.404 "nvme_admin": false, 00:13:19.404 "nvme_io": false, 00:13:19.404 "nvme_io_md": false, 00:13:19.404 "write_zeroes": true, 00:13:19.404 "zcopy": true, 00:13:19.404 "get_zone_info": false, 00:13:19.404 "zone_management": false, 00:13:19.404 "zone_append": false, 00:13:19.404 "compare": false, 00:13:19.404 "compare_and_write": false, 00:13:19.404 "abort": true, 00:13:19.404 "seek_hole": false, 00:13:19.404 "seek_data": false, 00:13:19.404 "copy": true, 00:13:19.404 "nvme_iov_md": false 00:13:19.404 }, 00:13:19.404 "memory_domains": [ 00:13:19.404 { 00:13:19.404 "dma_device_id": "system", 00:13:19.404 "dma_device_type": 1 00:13:19.404 }, 00:13:19.404 { 00:13:19.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.404 "dma_device_type": 2 00:13:19.404 } 00:13:19.404 ], 00:13:19.404 "driver_specific": {} 00:13:19.404 } 00:13:19.404 ] 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.404 
15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.404 "name": "Existed_Raid", 00:13:19.404 "uuid": "fb3b0b03-0c21-4bbc-ab5b-2af72ef0b580", 00:13:19.404 "strip_size_kb": 64, 00:13:19.404 "state": "online", 00:13:19.404 "raid_level": "raid5f", 00:13:19.404 "superblock": false, 00:13:19.404 "num_base_bdevs": 3, 00:13:19.404 "num_base_bdevs_discovered": 3, 00:13:19.404 "num_base_bdevs_operational": 3, 00:13:19.404 "base_bdevs_list": [ 00:13:19.404 { 00:13:19.404 "name": "NewBaseBdev", 00:13:19.404 "uuid": "c1a138cb-af3e-48c9-a0ab-bbc8344353f8", 00:13:19.404 "is_configured": 
true, 00:13:19.404 "data_offset": 0, 00:13:19.404 "data_size": 65536 00:13:19.404 }, 00:13:19.404 { 00:13:19.404 "name": "BaseBdev2", 00:13:19.404 "uuid": "c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:19.404 "is_configured": true, 00:13:19.404 "data_offset": 0, 00:13:19.404 "data_size": 65536 00:13:19.404 }, 00:13:19.404 { 00:13:19.404 "name": "BaseBdev3", 00:13:19.404 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:19.404 "is_configured": true, 00:13:19.404 "data_offset": 0, 00:13:19.404 "data_size": 65536 00:13:19.404 } 00:13:19.404 ] 00:13:19.404 }' 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.404 15:29:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.976 [2024-11-26 15:29:18.153032] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.976 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:19.976 "name": "Existed_Raid", 00:13:19.976 "aliases": [ 00:13:19.976 "fb3b0b03-0c21-4bbc-ab5b-2af72ef0b580" 00:13:19.976 ], 00:13:19.976 "product_name": "Raid Volume", 00:13:19.976 "block_size": 512, 00:13:19.976 "num_blocks": 131072, 00:13:19.976 "uuid": "fb3b0b03-0c21-4bbc-ab5b-2af72ef0b580", 00:13:19.976 "assigned_rate_limits": { 00:13:19.976 "rw_ios_per_sec": 0, 00:13:19.976 "rw_mbytes_per_sec": 0, 00:13:19.976 "r_mbytes_per_sec": 0, 00:13:19.976 "w_mbytes_per_sec": 0 00:13:19.976 }, 00:13:19.976 "claimed": false, 00:13:19.976 "zoned": false, 00:13:19.976 "supported_io_types": { 00:13:19.976 "read": true, 00:13:19.976 "write": true, 00:13:19.976 "unmap": false, 00:13:19.976 "flush": false, 00:13:19.976 "reset": true, 00:13:19.976 "nvme_admin": false, 00:13:19.976 "nvme_io": false, 00:13:19.976 "nvme_io_md": false, 00:13:19.976 "write_zeroes": true, 00:13:19.976 "zcopy": false, 00:13:19.976 "get_zone_info": false, 00:13:19.976 "zone_management": false, 00:13:19.976 "zone_append": false, 00:13:19.976 "compare": false, 00:13:19.976 "compare_and_write": false, 00:13:19.976 "abort": false, 00:13:19.976 "seek_hole": false, 00:13:19.976 "seek_data": false, 00:13:19.976 "copy": false, 00:13:19.976 "nvme_iov_md": false 00:13:19.976 }, 00:13:19.976 "driver_specific": { 00:13:19.976 "raid": { 00:13:19.976 "uuid": "fb3b0b03-0c21-4bbc-ab5b-2af72ef0b580", 00:13:19.976 "strip_size_kb": 64, 00:13:19.976 "state": "online", 00:13:19.976 "raid_level": "raid5f", 00:13:19.976 "superblock": false, 00:13:19.976 "num_base_bdevs": 3, 00:13:19.976 "num_base_bdevs_discovered": 3, 00:13:19.976 "num_base_bdevs_operational": 3, 00:13:19.976 "base_bdevs_list": [ 00:13:19.976 { 00:13:19.976 "name": "NewBaseBdev", 00:13:19.976 "uuid": 
"c1a138cb-af3e-48c9-a0ab-bbc8344353f8", 00:13:19.977 "is_configured": true, 00:13:19.977 "data_offset": 0, 00:13:19.977 "data_size": 65536 00:13:19.977 }, 00:13:19.977 { 00:13:19.977 "name": "BaseBdev2", 00:13:19.977 "uuid": "c25b1fe2-6677-4839-9d4b-5824ae2b83ae", 00:13:19.977 "is_configured": true, 00:13:19.977 "data_offset": 0, 00:13:19.977 "data_size": 65536 00:13:19.977 }, 00:13:19.977 { 00:13:19.977 "name": "BaseBdev3", 00:13:19.977 "uuid": "40ae4063-dcb1-4e9d-94f4-c05a55343660", 00:13:19.977 "is_configured": true, 00:13:19.977 "data_offset": 0, 00:13:19.977 "data_size": 65536 00:13:19.977 } 00:13:19.977 ] 00:13:19.977 } 00:13:19.977 } 00:13:19.977 }' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:19.977 BaseBdev2 00:13:19.977 BaseBdev3' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.977 [2024-11-26 15:29:18.400876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.977 [2024-11-26 15:29:18.400946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.977 [2024-11-26 15:29:18.401012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.977 [2024-11-26 15:29:18.401295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.977 [2024-11-26 15:29:18.401307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 91966 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 91966 ']' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 91966 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91966 00:13:19.977 15:29:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91966' 00:13:19.977 killing process with pid 91966 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 91966 00:13:19.977 [2024-11-26 15:29:18.443290] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.977 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 91966 00:13:20.237 [2024-11-26 15:29:18.474410] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.237 15:29:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:20.237 00:13:20.237 real 0m8.468s 00:13:20.237 user 0m14.406s 00:13:20.237 sys 0m1.818s 00:13:20.237 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.237 ************************************ 00:13:20.237 END TEST raid5f_state_function_test 00:13:20.237 ************************************ 00:13:20.237 15:29:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.497 15:29:18 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:20.497 15:29:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:20.497 15:29:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.497 15:29:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.497 ************************************ 00:13:20.497 START TEST raid5f_state_function_test_sb 00:13:20.497 ************************************ 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.497 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:20.498 
15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=92566 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92566' 00:13:20.498 Process raid pid: 92566 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 92566 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 92566 ']' 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.498 15:29:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.498 15:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.498 [2024-11-26 15:29:18.855723] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:13:20.498 [2024-11-26 15:29:18.855948] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.758 [2024-11-26 15:29:18.992128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:20.758 [2024-11-26 15:29:19.029369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.758 [2024-11-26 15:29:19.055478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.758 [2024-11-26 15:29:19.099135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.758 [2024-11-26 15:29:19.099168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.328 [2024-11-26 15:29:19.678295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.328 [2024-11-26 15:29:19.678339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.328 [2024-11-26 15:29:19.678353] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.328 [2024-11-26 15:29:19.678361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.328 [2024-11-26 15:29:19.678373] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.328 [2024-11-26 15:29:19.678380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.328 15:29:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.328 "name": "Existed_Raid", 00:13:21.328 "uuid": "22fe870a-adc2-4989-8e2e-609694fd5bc4", 
00:13:21.328 "strip_size_kb": 64, 00:13:21.328 "state": "configuring", 00:13:21.328 "raid_level": "raid5f", 00:13:21.328 "superblock": true, 00:13:21.328 "num_base_bdevs": 3, 00:13:21.328 "num_base_bdevs_discovered": 0, 00:13:21.328 "num_base_bdevs_operational": 3, 00:13:21.328 "base_bdevs_list": [ 00:13:21.328 { 00:13:21.328 "name": "BaseBdev1", 00:13:21.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.328 "is_configured": false, 00:13:21.328 "data_offset": 0, 00:13:21.328 "data_size": 0 00:13:21.328 }, 00:13:21.328 { 00:13:21.328 "name": "BaseBdev2", 00:13:21.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.328 "is_configured": false, 00:13:21.328 "data_offset": 0, 00:13:21.328 "data_size": 0 00:13:21.328 }, 00:13:21.328 { 00:13:21.328 "name": "BaseBdev3", 00:13:21.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.328 "is_configured": false, 00:13:21.328 "data_offset": 0, 00:13:21.328 "data_size": 0 00:13:21.328 } 00:13:21.328 ] 00:13:21.328 }' 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.328 15:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.907 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.907 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.907 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.907 [2024-11-26 15:29:20.122300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.907 [2024-11-26 15:29:20.122371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:13:21.907 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.907 15:29:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.907 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.907 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.908 [2024-11-26 15:29:20.134367] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.908 [2024-11-26 15:29:20.134439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.908 [2024-11-26 15:29:20.134482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.908 [2024-11-26 15:29:20.134502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.908 [2024-11-26 15:29:20.134522] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.908 [2024-11-26 15:29:20.134543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.908 [2024-11-26 15:29:20.155125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.908 BaseBdev1 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.908 [ 00:13:21.908 { 00:13:21.908 "name": "BaseBdev1", 00:13:21.908 "aliases": [ 00:13:21.908 "c0485794-dca2-4ce7-a1b4-959a67b54650" 00:13:21.908 ], 00:13:21.908 "product_name": "Malloc disk", 00:13:21.908 "block_size": 512, 00:13:21.908 "num_blocks": 65536, 00:13:21.908 "uuid": "c0485794-dca2-4ce7-a1b4-959a67b54650", 00:13:21.908 "assigned_rate_limits": { 00:13:21.908 "rw_ios_per_sec": 0, 00:13:21.908 "rw_mbytes_per_sec": 0, 00:13:21.908 "r_mbytes_per_sec": 0, 00:13:21.908 "w_mbytes_per_sec": 0 00:13:21.908 }, 
00:13:21.908 "claimed": true, 00:13:21.908 "claim_type": "exclusive_write", 00:13:21.908 "zoned": false, 00:13:21.908 "supported_io_types": { 00:13:21.908 "read": true, 00:13:21.908 "write": true, 00:13:21.908 "unmap": true, 00:13:21.908 "flush": true, 00:13:21.908 "reset": true, 00:13:21.908 "nvme_admin": false, 00:13:21.908 "nvme_io": false, 00:13:21.908 "nvme_io_md": false, 00:13:21.908 "write_zeroes": true, 00:13:21.908 "zcopy": true, 00:13:21.908 "get_zone_info": false, 00:13:21.908 "zone_management": false, 00:13:21.908 "zone_append": false, 00:13:21.908 "compare": false, 00:13:21.908 "compare_and_write": false, 00:13:21.908 "abort": true, 00:13:21.908 "seek_hole": false, 00:13:21.908 "seek_data": false, 00:13:21.908 "copy": true, 00:13:21.908 "nvme_iov_md": false 00:13:21.908 }, 00:13:21.908 "memory_domains": [ 00:13:21.908 { 00:13:21.908 "dma_device_id": "system", 00:13:21.908 "dma_device_type": 1 00:13:21.908 }, 00:13:21.908 { 00:13:21.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.908 "dma_device_type": 2 00:13:21.908 } 00:13:21.908 ], 00:13:21.908 "driver_specific": {} 00:13:21.908 } 00:13:21.908 ] 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.908 "name": "Existed_Raid", 00:13:21.908 "uuid": "1eb3e9ff-a2ac-4e7b-98fe-4bd178d162c8", 00:13:21.908 "strip_size_kb": 64, 00:13:21.908 "state": "configuring", 00:13:21.908 "raid_level": "raid5f", 00:13:21.908 "superblock": true, 00:13:21.908 "num_base_bdevs": 3, 00:13:21.908 "num_base_bdevs_discovered": 1, 00:13:21.908 "num_base_bdevs_operational": 3, 00:13:21.908 "base_bdevs_list": [ 00:13:21.908 { 00:13:21.908 "name": "BaseBdev1", 00:13:21.908 "uuid": "c0485794-dca2-4ce7-a1b4-959a67b54650", 00:13:21.908 "is_configured": true, 00:13:21.908 "data_offset": 2048, 00:13:21.908 "data_size": 63488 00:13:21.908 }, 00:13:21.908 { 00:13:21.908 "name": "BaseBdev2", 00:13:21.908 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:21.908 "is_configured": false, 00:13:21.908 "data_offset": 0, 00:13:21.908 "data_size": 0 00:13:21.908 }, 00:13:21.908 { 00:13:21.908 "name": "BaseBdev3", 00:13:21.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.908 "is_configured": false, 00:13:21.908 "data_offset": 0, 00:13:21.908 "data_size": 0 00:13:21.908 } 00:13:21.908 ] 00:13:21.908 }' 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.908 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.207 [2024-11-26 15:29:20.607280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.207 [2024-11-26 15:29:20.607328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.207 [2024-11-26 15:29:20.619326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.207 [2024-11-26 15:29:20.621118] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:22.207 [2024-11-26 15:29:20.621157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.207 [2024-11-26 15:29:20.621171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.207 [2024-11-26 15:29:20.621194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.207 15:29:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.207 "name": "Existed_Raid", 00:13:22.207 "uuid": "c720b6a2-dbb4-46da-a160-8fe9ee4a2d06", 00:13:22.207 "strip_size_kb": 64, 00:13:22.207 "state": "configuring", 00:13:22.207 "raid_level": "raid5f", 00:13:22.207 "superblock": true, 00:13:22.207 "num_base_bdevs": 3, 00:13:22.207 "num_base_bdevs_discovered": 1, 00:13:22.207 "num_base_bdevs_operational": 3, 00:13:22.207 "base_bdevs_list": [ 00:13:22.207 { 00:13:22.207 "name": "BaseBdev1", 00:13:22.207 "uuid": "c0485794-dca2-4ce7-a1b4-959a67b54650", 00:13:22.207 "is_configured": true, 00:13:22.207 "data_offset": 2048, 00:13:22.207 "data_size": 63488 00:13:22.207 }, 00:13:22.207 { 00:13:22.207 "name": "BaseBdev2", 00:13:22.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.207 "is_configured": false, 00:13:22.207 "data_offset": 0, 00:13:22.207 "data_size": 0 00:13:22.207 }, 00:13:22.207 { 00:13:22.207 "name": "BaseBdev3", 00:13:22.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.207 "is_configured": false, 00:13:22.207 "data_offset": 0, 00:13:22.207 "data_size": 0 00:13:22.207 } 00:13:22.207 ] 00:13:22.207 }' 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.207 15:29:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.777 [2024-11-26 15:29:21.030304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.777 BaseBdev2 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.777 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.777 [ 00:13:22.777 { 00:13:22.777 "name": "BaseBdev2", 00:13:22.777 "aliases": [ 00:13:22.777 "e18e81e6-3bca-4f68-8d4b-e0040dd6bfd4" 00:13:22.777 ], 00:13:22.778 "product_name": "Malloc disk", 00:13:22.778 "block_size": 512, 00:13:22.778 "num_blocks": 65536, 00:13:22.778 "uuid": "e18e81e6-3bca-4f68-8d4b-e0040dd6bfd4", 00:13:22.778 "assigned_rate_limits": { 00:13:22.778 "rw_ios_per_sec": 0, 00:13:22.778 "rw_mbytes_per_sec": 0, 00:13:22.778 "r_mbytes_per_sec": 0, 00:13:22.778 "w_mbytes_per_sec": 0 00:13:22.778 }, 00:13:22.778 "claimed": true, 00:13:22.778 "claim_type": "exclusive_write", 00:13:22.778 "zoned": false, 00:13:22.778 "supported_io_types": { 00:13:22.778 "read": true, 00:13:22.778 "write": true, 00:13:22.778 "unmap": true, 00:13:22.778 "flush": true, 00:13:22.778 "reset": true, 00:13:22.778 "nvme_admin": false, 00:13:22.778 "nvme_io": false, 00:13:22.778 "nvme_io_md": false, 00:13:22.778 "write_zeroes": true, 00:13:22.778 "zcopy": true, 00:13:22.778 "get_zone_info": false, 00:13:22.778 "zone_management": false, 00:13:22.778 "zone_append": false, 00:13:22.778 "compare": false, 00:13:22.778 "compare_and_write": false, 00:13:22.778 "abort": true, 00:13:22.778 "seek_hole": false, 00:13:22.778 "seek_data": false, 00:13:22.778 "copy": true, 00:13:22.778 "nvme_iov_md": false 00:13:22.778 }, 00:13:22.778 "memory_domains": [ 00:13:22.778 { 00:13:22.778 "dma_device_id": "system", 00:13:22.778 "dma_device_type": 1 00:13:22.778 }, 00:13:22.778 { 00:13:22.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.778 "dma_device_type": 2 00:13:22.778 } 00:13:22.778 ], 00:13:22.778 "driver_specific": {} 00:13:22.778 } 00:13:22.778 ] 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.778 15:29:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.778 "name": "Existed_Raid", 00:13:22.778 "uuid": "c720b6a2-dbb4-46da-a160-8fe9ee4a2d06", 00:13:22.778 "strip_size_kb": 64, 00:13:22.778 "state": "configuring", 00:13:22.778 "raid_level": "raid5f", 00:13:22.778 "superblock": true, 00:13:22.778 "num_base_bdevs": 3, 00:13:22.778 "num_base_bdevs_discovered": 2, 00:13:22.778 "num_base_bdevs_operational": 3, 00:13:22.778 "base_bdevs_list": [ 00:13:22.778 { 00:13:22.778 "name": "BaseBdev1", 00:13:22.778 "uuid": "c0485794-dca2-4ce7-a1b4-959a67b54650", 00:13:22.778 "is_configured": true, 00:13:22.778 "data_offset": 2048, 00:13:22.778 "data_size": 63488 00:13:22.778 }, 00:13:22.778 { 00:13:22.778 "name": "BaseBdev2", 00:13:22.778 "uuid": "e18e81e6-3bca-4f68-8d4b-e0040dd6bfd4", 00:13:22.778 "is_configured": true, 00:13:22.778 "data_offset": 2048, 00:13:22.778 "data_size": 63488 00:13:22.778 }, 00:13:22.778 { 00:13:22.778 "name": "BaseBdev3", 00:13:22.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.778 "is_configured": false, 00:13:22.778 "data_offset": 0, 00:13:22.778 "data_size": 0 00:13:22.778 } 00:13:22.778 ] 00:13:22.778 }' 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.778 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.349 [2024-11-26 15:29:21.535979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:13:23.349 [2024-11-26 15:29:21.536591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:23.349 [2024-11-26 15:29:21.536683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:23.349 BaseBdev3 00:13:23.349 [2024-11-26 15:29:21.537707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:23.349 [2024-11-26 15:29:21.539370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:23.349 [2024-11-26 15:29:21.539597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:23.349 [2024-11-26 15:29:21.540295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.349 15:29:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.349 [ 00:13:23.349 { 00:13:23.349 "name": "BaseBdev3", 00:13:23.349 "aliases": [ 00:13:23.349 "e19c1aae-be60-4381-b3b2-89306dc37b96" 00:13:23.349 ], 00:13:23.349 "product_name": "Malloc disk", 00:13:23.349 "block_size": 512, 00:13:23.349 "num_blocks": 65536, 00:13:23.349 "uuid": "e19c1aae-be60-4381-b3b2-89306dc37b96", 00:13:23.349 "assigned_rate_limits": { 00:13:23.349 "rw_ios_per_sec": 0, 00:13:23.349 "rw_mbytes_per_sec": 0, 00:13:23.349 "r_mbytes_per_sec": 0, 00:13:23.349 "w_mbytes_per_sec": 0 00:13:23.349 }, 00:13:23.349 "claimed": true, 00:13:23.349 "claim_type": "exclusive_write", 00:13:23.349 "zoned": false, 00:13:23.349 "supported_io_types": { 00:13:23.349 "read": true, 00:13:23.349 "write": true, 00:13:23.349 "unmap": true, 00:13:23.349 "flush": true, 00:13:23.349 "reset": true, 00:13:23.349 "nvme_admin": false, 00:13:23.349 "nvme_io": false, 00:13:23.349 "nvme_io_md": false, 00:13:23.349 "write_zeroes": true, 00:13:23.349 "zcopy": true, 00:13:23.349 "get_zone_info": false, 00:13:23.349 "zone_management": false, 00:13:23.349 "zone_append": false, 00:13:23.349 "compare": false, 00:13:23.349 "compare_and_write": false, 00:13:23.349 "abort": true, 00:13:23.349 "seek_hole": false, 00:13:23.349 "seek_data": false, 00:13:23.349 "copy": true, 00:13:23.349 "nvme_iov_md": false 00:13:23.349 }, 00:13:23.349 "memory_domains": [ 00:13:23.349 { 00:13:23.349 "dma_device_id": "system", 00:13:23.349 "dma_device_type": 1 00:13:23.349 }, 00:13:23.349 { 00:13:23.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.349 
"dma_device_type": 2 00:13:23.349 } 00:13:23.349 ], 00:13:23.349 "driver_specific": {} 00:13:23.349 } 00:13:23.349 ] 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.349 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.349 "name": "Existed_Raid", 00:13:23.349 "uuid": "c720b6a2-dbb4-46da-a160-8fe9ee4a2d06", 00:13:23.349 "strip_size_kb": 64, 00:13:23.349 "state": "online", 00:13:23.349 "raid_level": "raid5f", 00:13:23.349 "superblock": true, 00:13:23.349 "num_base_bdevs": 3, 00:13:23.349 "num_base_bdevs_discovered": 3, 00:13:23.349 "num_base_bdevs_operational": 3, 00:13:23.349 "base_bdevs_list": [ 00:13:23.349 { 00:13:23.349 "name": "BaseBdev1", 00:13:23.349 "uuid": "c0485794-dca2-4ce7-a1b4-959a67b54650", 00:13:23.349 "is_configured": true, 00:13:23.349 "data_offset": 2048, 00:13:23.349 "data_size": 63488 00:13:23.349 }, 00:13:23.349 { 00:13:23.349 "name": "BaseBdev2", 00:13:23.349 "uuid": "e18e81e6-3bca-4f68-8d4b-e0040dd6bfd4", 00:13:23.349 "is_configured": true, 00:13:23.349 "data_offset": 2048, 00:13:23.349 "data_size": 63488 00:13:23.349 }, 00:13:23.349 { 00:13:23.349 "name": "BaseBdev3", 00:13:23.349 "uuid": "e19c1aae-be60-4381-b3b2-89306dc37b96", 00:13:23.349 "is_configured": true, 00:13:23.349 "data_offset": 2048, 00:13:23.349 "data_size": 63488 00:13:23.349 } 00:13:23.349 ] 00:13:23.349 }' 00:13:23.350 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.350 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:23.610 15:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.610 [2024-11-26 15:29:21.992470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.610 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.610 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:23.610 "name": "Existed_Raid", 00:13:23.610 "aliases": [ 00:13:23.610 "c720b6a2-dbb4-46da-a160-8fe9ee4a2d06" 00:13:23.610 ], 00:13:23.610 "product_name": "Raid Volume", 00:13:23.610 "block_size": 512, 00:13:23.610 "num_blocks": 126976, 00:13:23.610 "uuid": "c720b6a2-dbb4-46da-a160-8fe9ee4a2d06", 00:13:23.610 "assigned_rate_limits": { 00:13:23.610 "rw_ios_per_sec": 0, 00:13:23.610 "rw_mbytes_per_sec": 0, 00:13:23.610 "r_mbytes_per_sec": 0, 00:13:23.610 "w_mbytes_per_sec": 0 00:13:23.610 }, 00:13:23.610 "claimed": false, 00:13:23.610 "zoned": false, 00:13:23.610 "supported_io_types": { 00:13:23.610 "read": true, 00:13:23.610 "write": true, 00:13:23.610 "unmap": false, 
00:13:23.610 "flush": false, 00:13:23.610 "reset": true, 00:13:23.610 "nvme_admin": false, 00:13:23.610 "nvme_io": false, 00:13:23.610 "nvme_io_md": false, 00:13:23.610 "write_zeroes": true, 00:13:23.610 "zcopy": false, 00:13:23.610 "get_zone_info": false, 00:13:23.610 "zone_management": false, 00:13:23.610 "zone_append": false, 00:13:23.610 "compare": false, 00:13:23.610 "compare_and_write": false, 00:13:23.610 "abort": false, 00:13:23.610 "seek_hole": false, 00:13:23.610 "seek_data": false, 00:13:23.610 "copy": false, 00:13:23.610 "nvme_iov_md": false 00:13:23.610 }, 00:13:23.610 "driver_specific": { 00:13:23.610 "raid": { 00:13:23.610 "uuid": "c720b6a2-dbb4-46da-a160-8fe9ee4a2d06", 00:13:23.610 "strip_size_kb": 64, 00:13:23.610 "state": "online", 00:13:23.610 "raid_level": "raid5f", 00:13:23.610 "superblock": true, 00:13:23.610 "num_base_bdevs": 3, 00:13:23.610 "num_base_bdevs_discovered": 3, 00:13:23.610 "num_base_bdevs_operational": 3, 00:13:23.610 "base_bdevs_list": [ 00:13:23.610 { 00:13:23.610 "name": "BaseBdev1", 00:13:23.610 "uuid": "c0485794-dca2-4ce7-a1b4-959a67b54650", 00:13:23.610 "is_configured": true, 00:13:23.610 "data_offset": 2048, 00:13:23.610 "data_size": 63488 00:13:23.610 }, 00:13:23.610 { 00:13:23.610 "name": "BaseBdev2", 00:13:23.610 "uuid": "e18e81e6-3bca-4f68-8d4b-e0040dd6bfd4", 00:13:23.610 "is_configured": true, 00:13:23.610 "data_offset": 2048, 00:13:23.610 "data_size": 63488 00:13:23.610 }, 00:13:23.610 { 00:13:23.610 "name": "BaseBdev3", 00:13:23.610 "uuid": "e19c1aae-be60-4381-b3b2-89306dc37b96", 00:13:23.610 "is_configured": true, 00:13:23.610 "data_offset": 2048, 00:13:23.610 "data_size": 63488 00:13:23.610 } 00:13:23.610 ] 00:13:23.610 } 00:13:23.610 } 00:13:23.610 }' 00:13:23.610 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:23.610 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 
-- # base_bdev_names='BaseBdev1 00:13:23.610 BaseBdev2 00:13:23.610 BaseBdev3' 00:13:23.610 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.870 15:29:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.870 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.871 [2024-11-26 15:29:22.268404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:23.871 
15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.871 15:29:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.871 "name": "Existed_Raid", 00:13:23.871 "uuid": "c720b6a2-dbb4-46da-a160-8fe9ee4a2d06", 00:13:23.871 "strip_size_kb": 64, 00:13:23.871 "state": "online", 00:13:23.871 "raid_level": "raid5f", 00:13:23.871 "superblock": true, 00:13:23.871 "num_base_bdevs": 3, 00:13:23.871 "num_base_bdevs_discovered": 2, 00:13:23.871 "num_base_bdevs_operational": 2, 00:13:23.871 "base_bdevs_list": [ 00:13:23.871 { 00:13:23.871 "name": null, 00:13:23.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.871 "is_configured": false, 00:13:23.871 "data_offset": 0, 00:13:23.871 "data_size": 63488 00:13:23.871 }, 00:13:23.871 { 00:13:23.871 "name": "BaseBdev2", 00:13:23.871 "uuid": "e18e81e6-3bca-4f68-8d4b-e0040dd6bfd4", 00:13:23.871 "is_configured": true, 00:13:23.871 "data_offset": 2048, 00:13:23.871 "data_size": 63488 00:13:23.871 }, 00:13:23.871 { 00:13:23.871 "name": "BaseBdev3", 00:13:23.871 "uuid": "e19c1aae-be60-4381-b3b2-89306dc37b96", 00:13:23.871 "is_configured": true, 00:13:23.871 "data_offset": 2048, 00:13:23.871 "data_size": 63488 00:13:23.871 } 00:13:23.871 ] 00:13:23.871 }' 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.871 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:24.441 
15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.441 [2024-11-26 15:29:22.791793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:24.441 [2024-11-26 15:29:22.791922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.441 [2024-11-26 15:29:22.803351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.441 15:29:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.441 [2024-11-26 15:29:22.863397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:24.441 [2024-11-26 15:29:22.863447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.441 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.441 15:29:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.702 BaseBdev2 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:24.702 15:29:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.702 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.702 [ 00:13:24.702 { 00:13:24.702 "name": "BaseBdev2", 00:13:24.702 "aliases": [ 00:13:24.702 "0848cbb2-ebd9-4730-985b-a059e6ecc575" 00:13:24.702 ], 00:13:24.702 "product_name": "Malloc disk", 00:13:24.702 "block_size": 512, 00:13:24.702 "num_blocks": 65536, 00:13:24.702 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:24.702 "assigned_rate_limits": { 00:13:24.702 "rw_ios_per_sec": 0, 00:13:24.702 "rw_mbytes_per_sec": 0, 00:13:24.702 "r_mbytes_per_sec": 0, 00:13:24.702 "w_mbytes_per_sec": 0 00:13:24.702 }, 00:13:24.702 "claimed": false, 00:13:24.702 "zoned": false, 00:13:24.702 "supported_io_types": { 00:13:24.702 "read": true, 00:13:24.702 "write": true, 00:13:24.702 "unmap": true, 00:13:24.702 "flush": true, 00:13:24.702 "reset": true, 00:13:24.702 "nvme_admin": false, 00:13:24.702 "nvme_io": false, 00:13:24.702 "nvme_io_md": false, 00:13:24.702 "write_zeroes": true, 00:13:24.703 "zcopy": true, 00:13:24.703 "get_zone_info": false, 00:13:24.703 "zone_management": false, 00:13:24.703 "zone_append": false, 00:13:24.703 "compare": false, 00:13:24.703 "compare_and_write": false, 00:13:24.703 "abort": true, 00:13:24.703 "seek_hole": false, 00:13:24.703 "seek_data": false, 00:13:24.703 "copy": true, 00:13:24.703 "nvme_iov_md": false 00:13:24.703 }, 00:13:24.703 "memory_domains": [ 
00:13:24.703 { 00:13:24.703 "dma_device_id": "system", 00:13:24.703 "dma_device_type": 1 00:13:24.703 }, 00:13:24.703 { 00:13:24.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.703 "dma_device_type": 2 00:13:24.703 } 00:13:24.703 ], 00:13:24.703 "driver_specific": {} 00:13:24.703 } 00:13:24.703 ] 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.703 BaseBdev3 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:24.703 15:29:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.703 15:29:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.703 [ 00:13:24.703 { 00:13:24.703 "name": "BaseBdev3", 00:13:24.703 "aliases": [ 00:13:24.703 "31536fe9-13e8-440b-992b-c4255a777385" 00:13:24.703 ], 00:13:24.703 "product_name": "Malloc disk", 00:13:24.703 "block_size": 512, 00:13:24.703 "num_blocks": 65536, 00:13:24.703 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:24.703 "assigned_rate_limits": { 00:13:24.703 "rw_ios_per_sec": 0, 00:13:24.703 "rw_mbytes_per_sec": 0, 00:13:24.703 "r_mbytes_per_sec": 0, 00:13:24.703 "w_mbytes_per_sec": 0 00:13:24.703 }, 00:13:24.703 "claimed": false, 00:13:24.703 "zoned": false, 00:13:24.703 "supported_io_types": { 00:13:24.703 "read": true, 00:13:24.703 "write": true, 00:13:24.703 "unmap": true, 00:13:24.703 "flush": true, 00:13:24.703 "reset": true, 00:13:24.703 "nvme_admin": false, 00:13:24.703 "nvme_io": false, 00:13:24.703 "nvme_io_md": false, 00:13:24.703 "write_zeroes": true, 00:13:24.703 "zcopy": true, 00:13:24.703 "get_zone_info": false, 00:13:24.703 "zone_management": false, 00:13:24.703 "zone_append": false, 00:13:24.703 "compare": false, 00:13:24.703 "compare_and_write": false, 00:13:24.703 "abort": true, 00:13:24.703 "seek_hole": false, 00:13:24.703 
"seek_data": false, 00:13:24.703 "copy": true, 00:13:24.703 "nvme_iov_md": false 00:13:24.703 }, 00:13:24.703 "memory_domains": [ 00:13:24.703 { 00:13:24.703 "dma_device_id": "system", 00:13:24.703 "dma_device_type": 1 00:13:24.703 }, 00:13:24.703 { 00:13:24.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.703 "dma_device_type": 2 00:13:24.703 } 00:13:24.703 ], 00:13:24.703 "driver_specific": {} 00:13:24.703 } 00:13:24.703 ] 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.703 [2024-11-26 15:29:23.038467] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:24.703 [2024-11-26 15:29:23.038553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:24.703 [2024-11-26 15:29:23.038590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.703 [2024-11-26 15:29:23.040347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.703 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.703 "name": "Existed_Raid", 00:13:24.703 "uuid": "2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:24.703 "strip_size_kb": 64, 00:13:24.703 
"state": "configuring", 00:13:24.703 "raid_level": "raid5f", 00:13:24.703 "superblock": true, 00:13:24.703 "num_base_bdevs": 3, 00:13:24.703 "num_base_bdevs_discovered": 2, 00:13:24.703 "num_base_bdevs_operational": 3, 00:13:24.703 "base_bdevs_list": [ 00:13:24.703 { 00:13:24.703 "name": "BaseBdev1", 00:13:24.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.703 "is_configured": false, 00:13:24.703 "data_offset": 0, 00:13:24.703 "data_size": 0 00:13:24.703 }, 00:13:24.703 { 00:13:24.703 "name": "BaseBdev2", 00:13:24.703 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:24.703 "is_configured": true, 00:13:24.703 "data_offset": 2048, 00:13:24.703 "data_size": 63488 00:13:24.703 }, 00:13:24.703 { 00:13:24.703 "name": "BaseBdev3", 00:13:24.703 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:24.703 "is_configured": true, 00:13:24.703 "data_offset": 2048, 00:13:24.703 "data_size": 63488 00:13:24.704 } 00:13:24.704 ] 00:13:24.704 }' 00:13:24.704 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.704 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.274 [2024-11-26 15:29:23.498588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.274 "name": "Existed_Raid", 00:13:25.274 "uuid": "2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:25.274 "strip_size_kb": 64, 00:13:25.274 "state": "configuring", 00:13:25.274 "raid_level": "raid5f", 00:13:25.274 "superblock": true, 00:13:25.274 "num_base_bdevs": 3, 00:13:25.274 "num_base_bdevs_discovered": 1, 
00:13:25.274 "num_base_bdevs_operational": 3, 00:13:25.274 "base_bdevs_list": [ 00:13:25.274 { 00:13:25.274 "name": "BaseBdev1", 00:13:25.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.274 "is_configured": false, 00:13:25.274 "data_offset": 0, 00:13:25.274 "data_size": 0 00:13:25.274 }, 00:13:25.274 { 00:13:25.274 "name": null, 00:13:25.274 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:25.274 "is_configured": false, 00:13:25.274 "data_offset": 0, 00:13:25.274 "data_size": 63488 00:13:25.274 }, 00:13:25.274 { 00:13:25.274 "name": "BaseBdev3", 00:13:25.274 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:25.274 "is_configured": true, 00:13:25.274 "data_offset": 2048, 00:13:25.274 "data_size": 63488 00:13:25.274 } 00:13:25.274 ] 00:13:25.274 }' 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.274 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.534 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.534 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.534 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.534 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:25.534 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.534 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:25.534 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:25.534 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.534 15:29:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.534 [2024-11-26 15:29:23.973579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.534 BaseBdev1 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.535 15:29:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.535 [ 00:13:25.535 { 00:13:25.535 "name": "BaseBdev1", 00:13:25.535 "aliases": [ 00:13:25.535 
"3bc6cd06-d027-4a80-8141-0a653587008d" 00:13:25.535 ], 00:13:25.535 "product_name": "Malloc disk", 00:13:25.535 "block_size": 512, 00:13:25.535 "num_blocks": 65536, 00:13:25.535 "uuid": "3bc6cd06-d027-4a80-8141-0a653587008d", 00:13:25.535 "assigned_rate_limits": { 00:13:25.535 "rw_ios_per_sec": 0, 00:13:25.535 "rw_mbytes_per_sec": 0, 00:13:25.535 "r_mbytes_per_sec": 0, 00:13:25.535 "w_mbytes_per_sec": 0 00:13:25.535 }, 00:13:25.535 "claimed": true, 00:13:25.535 "claim_type": "exclusive_write", 00:13:25.535 "zoned": false, 00:13:25.535 "supported_io_types": { 00:13:25.535 "read": true, 00:13:25.535 "write": true, 00:13:25.535 "unmap": true, 00:13:25.535 "flush": true, 00:13:25.535 "reset": true, 00:13:25.535 "nvme_admin": false, 00:13:25.535 "nvme_io": false, 00:13:25.535 "nvme_io_md": false, 00:13:25.535 "write_zeroes": true, 00:13:25.535 "zcopy": true, 00:13:25.535 "get_zone_info": false, 00:13:25.535 "zone_management": false, 00:13:25.535 "zone_append": false, 00:13:25.535 "compare": false, 00:13:25.535 "compare_and_write": false, 00:13:25.795 "abort": true, 00:13:25.795 "seek_hole": false, 00:13:25.795 "seek_data": false, 00:13:25.795 "copy": true, 00:13:25.795 "nvme_iov_md": false 00:13:25.795 }, 00:13:25.795 "memory_domains": [ 00:13:25.795 { 00:13:25.795 "dma_device_id": "system", 00:13:25.795 "dma_device_type": 1 00:13:25.795 }, 00:13:25.795 { 00:13:25.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.795 "dma_device_type": 2 00:13:25.795 } 00:13:25.795 ], 00:13:25.795 "driver_specific": {} 00:13:25.795 } 00:13:25.795 ] 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.795 "name": "Existed_Raid", 00:13:25.795 "uuid": "2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:25.795 "strip_size_kb": 64, 00:13:25.795 "state": "configuring", 00:13:25.795 "raid_level": "raid5f", 00:13:25.795 "superblock": true, 00:13:25.795 "num_base_bdevs": 3, 00:13:25.795 
"num_base_bdevs_discovered": 2, 00:13:25.795 "num_base_bdevs_operational": 3, 00:13:25.795 "base_bdevs_list": [ 00:13:25.795 { 00:13:25.795 "name": "BaseBdev1", 00:13:25.795 "uuid": "3bc6cd06-d027-4a80-8141-0a653587008d", 00:13:25.795 "is_configured": true, 00:13:25.795 "data_offset": 2048, 00:13:25.795 "data_size": 63488 00:13:25.795 }, 00:13:25.795 { 00:13:25.795 "name": null, 00:13:25.795 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:25.795 "is_configured": false, 00:13:25.795 "data_offset": 0, 00:13:25.795 "data_size": 63488 00:13:25.795 }, 00:13:25.795 { 00:13:25.795 "name": "BaseBdev3", 00:13:25.795 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:25.795 "is_configured": true, 00:13:25.795 "data_offset": 2048, 00:13:25.795 "data_size": 63488 00:13:25.795 } 00:13:25.795 ] 00:13:25.795 }' 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.795 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.055 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:26.055 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.055 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.055 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.055 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.055 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:26.055 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:26.055 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:26.055 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.055 [2024-11-26 15:29:24.525773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.316 "name": "Existed_Raid", 00:13:26.316 "uuid": "2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:26.316 "strip_size_kb": 64, 00:13:26.316 "state": "configuring", 00:13:26.316 "raid_level": "raid5f", 00:13:26.316 "superblock": true, 00:13:26.316 "num_base_bdevs": 3, 00:13:26.316 "num_base_bdevs_discovered": 1, 00:13:26.316 "num_base_bdevs_operational": 3, 00:13:26.316 "base_bdevs_list": [ 00:13:26.316 { 00:13:26.316 "name": "BaseBdev1", 00:13:26.316 "uuid": "3bc6cd06-d027-4a80-8141-0a653587008d", 00:13:26.316 "is_configured": true, 00:13:26.316 "data_offset": 2048, 00:13:26.316 "data_size": 63488 00:13:26.316 }, 00:13:26.316 { 00:13:26.316 "name": null, 00:13:26.316 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:26.316 "is_configured": false, 00:13:26.316 "data_offset": 0, 00:13:26.316 "data_size": 63488 00:13:26.316 }, 00:13:26.316 { 00:13:26.316 "name": null, 00:13:26.316 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:26.316 "is_configured": false, 00:13:26.316 "data_offset": 0, 00:13:26.316 "data_size": 63488 00:13:26.316 } 00:13:26.316 ] 00:13:26.316 }' 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.316 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.576 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.576 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.576 15:29:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:26.576 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.577 [2024-11-26 15:29:24.989938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.577 15:29:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.577 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.577 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.577 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.577 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.577 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.577 "name": "Existed_Raid", 00:13:26.577 "uuid": "2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:26.577 "strip_size_kb": 64, 00:13:26.577 "state": "configuring", 00:13:26.577 "raid_level": "raid5f", 00:13:26.577 "superblock": true, 00:13:26.577 "num_base_bdevs": 3, 00:13:26.577 "num_base_bdevs_discovered": 2, 00:13:26.577 "num_base_bdevs_operational": 3, 00:13:26.577 "base_bdevs_list": [ 00:13:26.577 { 00:13:26.577 "name": "BaseBdev1", 00:13:26.577 "uuid": "3bc6cd06-d027-4a80-8141-0a653587008d", 00:13:26.577 "is_configured": true, 00:13:26.577 "data_offset": 2048, 00:13:26.577 "data_size": 63488 00:13:26.577 }, 00:13:26.577 { 00:13:26.577 "name": null, 00:13:26.577 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:26.577 "is_configured": false, 00:13:26.577 "data_offset": 0, 00:13:26.577 "data_size": 63488 00:13:26.577 }, 00:13:26.577 { 00:13:26.577 "name": "BaseBdev3", 00:13:26.577 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:26.577 "is_configured": true, 00:13:26.577 "data_offset": 2048, 00:13:26.577 "data_size": 63488 00:13:26.577 } 00:13:26.577 ] 00:13:26.577 }' 00:13:26.577 15:29:25 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.577 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.147 [2024-11-26 15:29:25.462082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.147 15:29:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.147 "name": "Existed_Raid", 00:13:27.147 "uuid": "2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:27.147 "strip_size_kb": 64, 00:13:27.147 "state": "configuring", 00:13:27.147 "raid_level": "raid5f", 00:13:27.147 "superblock": true, 00:13:27.147 "num_base_bdevs": 3, 00:13:27.147 "num_base_bdevs_discovered": 1, 00:13:27.147 "num_base_bdevs_operational": 3, 00:13:27.147 "base_bdevs_list": [ 00:13:27.147 { 00:13:27.147 "name": null, 00:13:27.147 "uuid": "3bc6cd06-d027-4a80-8141-0a653587008d", 00:13:27.147 "is_configured": false, 00:13:27.147 "data_offset": 0, 00:13:27.147 "data_size": 63488 00:13:27.147 }, 
00:13:27.147 { 00:13:27.147 "name": null, 00:13:27.147 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:27.147 "is_configured": false, 00:13:27.147 "data_offset": 0, 00:13:27.147 "data_size": 63488 00:13:27.147 }, 00:13:27.147 { 00:13:27.147 "name": "BaseBdev3", 00:13:27.147 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:27.147 "is_configured": true, 00:13:27.147 "data_offset": 2048, 00:13:27.147 "data_size": 63488 00:13:27.147 } 00:13:27.147 ] 00:13:27.147 }' 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.147 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.716 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.717 [2024-11-26 15:29:25.972791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.717 15:29:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.717 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.717 "name": "Existed_Raid", 00:13:27.717 "uuid": 
"2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:27.717 "strip_size_kb": 64, 00:13:27.717 "state": "configuring", 00:13:27.717 "raid_level": "raid5f", 00:13:27.717 "superblock": true, 00:13:27.717 "num_base_bdevs": 3, 00:13:27.717 "num_base_bdevs_discovered": 2, 00:13:27.717 "num_base_bdevs_operational": 3, 00:13:27.717 "base_bdevs_list": [ 00:13:27.717 { 00:13:27.717 "name": null, 00:13:27.717 "uuid": "3bc6cd06-d027-4a80-8141-0a653587008d", 00:13:27.717 "is_configured": false, 00:13:27.717 "data_offset": 0, 00:13:27.717 "data_size": 63488 00:13:27.717 }, 00:13:27.717 { 00:13:27.717 "name": "BaseBdev2", 00:13:27.717 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:27.717 "is_configured": true, 00:13:27.717 "data_offset": 2048, 00:13:27.717 "data_size": 63488 00:13:27.717 }, 00:13:27.717 { 00:13:27.717 "name": "BaseBdev3", 00:13:27.717 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:27.717 "is_configured": true, 00:13:27.717 "data_offset": 2048, 00:13:27.717 "data_size": 63488 00:13:27.717 } 00:13:27.717 ] 00:13:27.717 }' 00:13:27.717 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.717 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:27.977 15:29:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.977 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3bc6cd06-d027-4a80-8141-0a653587008d 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.237 [2024-11-26 15:29:26.467766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:28.237 [2024-11-26 15:29:26.467928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:28.237 [2024-11-26 15:29:26.467940] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:28.237 [2024-11-26 15:29:26.468198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:28.237 NewBaseBdev 00:13:28.237 [2024-11-26 15:29:26.468631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:28.237 [2024-11-26 15:29:26.468648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:28.237 [2024-11-26 15:29:26.468748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.237 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.237 [ 00:13:28.237 { 00:13:28.237 "name": "NewBaseBdev", 00:13:28.237 "aliases": [ 00:13:28.237 "3bc6cd06-d027-4a80-8141-0a653587008d" 00:13:28.237 ], 00:13:28.237 "product_name": "Malloc disk", 00:13:28.237 "block_size": 512, 00:13:28.237 "num_blocks": 65536, 00:13:28.237 "uuid": "3bc6cd06-d027-4a80-8141-0a653587008d", 00:13:28.237 "assigned_rate_limits": { 00:13:28.237 "rw_ios_per_sec": 0, 00:13:28.237 "rw_mbytes_per_sec": 0, 00:13:28.237 
"r_mbytes_per_sec": 0, 00:13:28.237 "w_mbytes_per_sec": 0 00:13:28.238 }, 00:13:28.238 "claimed": true, 00:13:28.238 "claim_type": "exclusive_write", 00:13:28.238 "zoned": false, 00:13:28.238 "supported_io_types": { 00:13:28.238 "read": true, 00:13:28.238 "write": true, 00:13:28.238 "unmap": true, 00:13:28.238 "flush": true, 00:13:28.238 "reset": true, 00:13:28.238 "nvme_admin": false, 00:13:28.238 "nvme_io": false, 00:13:28.238 "nvme_io_md": false, 00:13:28.238 "write_zeroes": true, 00:13:28.238 "zcopy": true, 00:13:28.238 "get_zone_info": false, 00:13:28.238 "zone_management": false, 00:13:28.238 "zone_append": false, 00:13:28.238 "compare": false, 00:13:28.238 "compare_and_write": false, 00:13:28.238 "abort": true, 00:13:28.238 "seek_hole": false, 00:13:28.238 "seek_data": false, 00:13:28.238 "copy": true, 00:13:28.238 "nvme_iov_md": false 00:13:28.238 }, 00:13:28.238 "memory_domains": [ 00:13:28.238 { 00:13:28.238 "dma_device_id": "system", 00:13:28.238 "dma_device_type": 1 00:13:28.238 }, 00:13:28.238 { 00:13:28.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.238 "dma_device_type": 2 00:13:28.238 } 00:13:28.238 ], 00:13:28.238 "driver_specific": {} 00:13:28.238 } 00:13:28.238 ] 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.238 15:29:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.238 "name": "Existed_Raid", 00:13:28.238 "uuid": "2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:28.238 "strip_size_kb": 64, 00:13:28.238 "state": "online", 00:13:28.238 "raid_level": "raid5f", 00:13:28.238 "superblock": true, 00:13:28.238 "num_base_bdevs": 3, 00:13:28.238 "num_base_bdevs_discovered": 3, 00:13:28.238 "num_base_bdevs_operational": 3, 00:13:28.238 "base_bdevs_list": [ 00:13:28.238 { 00:13:28.238 "name": "NewBaseBdev", 00:13:28.238 "uuid": "3bc6cd06-d027-4a80-8141-0a653587008d", 00:13:28.238 "is_configured": true, 00:13:28.238 "data_offset": 2048, 00:13:28.238 "data_size": 63488 00:13:28.238 }, 
00:13:28.238 { 00:13:28.238 "name": "BaseBdev2", 00:13:28.238 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:28.238 "is_configured": true, 00:13:28.238 "data_offset": 2048, 00:13:28.238 "data_size": 63488 00:13:28.238 }, 00:13:28.238 { 00:13:28.238 "name": "BaseBdev3", 00:13:28.238 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:28.238 "is_configured": true, 00:13:28.238 "data_offset": 2048, 00:13:28.238 "data_size": 63488 00:13:28.238 } 00:13:28.238 ] 00:13:28.238 }' 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.238 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.498 [2024-11-26 15:29:26.924134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.498 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:28.498 "name": "Existed_Raid", 00:13:28.498 "aliases": [ 00:13:28.498 "2ed58577-2e5f-472a-b204-4127cff6ebad" 00:13:28.498 ], 00:13:28.498 "product_name": "Raid Volume", 00:13:28.498 "block_size": 512, 00:13:28.498 "num_blocks": 126976, 00:13:28.498 "uuid": "2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:28.498 "assigned_rate_limits": { 00:13:28.498 "rw_ios_per_sec": 0, 00:13:28.498 "rw_mbytes_per_sec": 0, 00:13:28.498 "r_mbytes_per_sec": 0, 00:13:28.498 "w_mbytes_per_sec": 0 00:13:28.498 }, 00:13:28.498 "claimed": false, 00:13:28.498 "zoned": false, 00:13:28.498 "supported_io_types": { 00:13:28.498 "read": true, 00:13:28.498 "write": true, 00:13:28.498 "unmap": false, 00:13:28.498 "flush": false, 00:13:28.498 "reset": true, 00:13:28.498 "nvme_admin": false, 00:13:28.498 "nvme_io": false, 00:13:28.498 "nvme_io_md": false, 00:13:28.498 "write_zeroes": true, 00:13:28.498 "zcopy": false, 00:13:28.498 "get_zone_info": false, 00:13:28.498 "zone_management": false, 00:13:28.498 "zone_append": false, 00:13:28.498 "compare": false, 00:13:28.498 "compare_and_write": false, 00:13:28.498 "abort": false, 00:13:28.498 "seek_hole": false, 00:13:28.498 "seek_data": false, 00:13:28.498 "copy": false, 00:13:28.498 "nvme_iov_md": false 00:13:28.498 }, 00:13:28.498 "driver_specific": { 00:13:28.498 "raid": { 00:13:28.498 "uuid": "2ed58577-2e5f-472a-b204-4127cff6ebad", 00:13:28.498 "strip_size_kb": 64, 00:13:28.498 "state": "online", 00:13:28.498 "raid_level": "raid5f", 00:13:28.498 "superblock": true, 00:13:28.498 "num_base_bdevs": 3, 00:13:28.498 "num_base_bdevs_discovered": 3, 00:13:28.498 "num_base_bdevs_operational": 3, 00:13:28.498 "base_bdevs_list": [ 00:13:28.498 { 00:13:28.498 "name": "NewBaseBdev", 00:13:28.499 "uuid": 
"3bc6cd06-d027-4a80-8141-0a653587008d", 00:13:28.499 "is_configured": true, 00:13:28.499 "data_offset": 2048, 00:13:28.499 "data_size": 63488 00:13:28.499 }, 00:13:28.499 { 00:13:28.499 "name": "BaseBdev2", 00:13:28.499 "uuid": "0848cbb2-ebd9-4730-985b-a059e6ecc575", 00:13:28.499 "is_configured": true, 00:13:28.499 "data_offset": 2048, 00:13:28.499 "data_size": 63488 00:13:28.499 }, 00:13:28.499 { 00:13:28.499 "name": "BaseBdev3", 00:13:28.499 "uuid": "31536fe9-13e8-440b-992b-c4255a777385", 00:13:28.499 "is_configured": true, 00:13:28.499 "data_offset": 2048, 00:13:28.499 "data_size": 63488 00:13:28.499 } 00:13:28.499 ] 00:13:28.499 } 00:13:28.499 } 00:13:28.499 }' 00:13:28.499 15:29:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:28.759 BaseBdev2 00:13:28.759 BaseBdev3' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.759 [2024-11-26 15:29:27.167992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:28.759 [2024-11-26 15:29:27.168018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.759 [2024-11-26 15:29:27.168078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.759 [2024-11-26 15:29:27.168334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.759 [2024-11-26 15:29:27.168346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 92566 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 92566 ']' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 92566 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.759 15:29:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92566 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92566' 00:13:28.759 killing process with pid 92566 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 92566 00:13:28.759 [2024-11-26 15:29:27.211203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.759 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 92566 00:13:29.019 [2024-11-26 15:29:27.241538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.019 15:29:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:29.019 00:13:29.019 real 0m8.702s 00:13:29.019 user 0m14.790s 00:13:29.019 sys 0m1.837s 00:13:29.019 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.019 15:29:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.019 ************************************ 00:13:29.019 END TEST raid5f_state_function_test_sb 00:13:29.019 ************************************ 00:13:29.279 15:29:27 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:29.279 15:29:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:29.279 15:29:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.279 15:29:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.279 ************************************ 00:13:29.279 START TEST 
raid5f_superblock_test 00:13:29.279 ************************************ 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- 
# raid_pid=93170 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 93170 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 93170 ']' 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.279 15:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.279 [2024-11-26 15:29:27.620786] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:13:29.279 [2024-11-26 15:29:27.621040] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93170 ] 00:13:29.539 [2024-11-26 15:29:27.756923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:29.539 [2024-11-26 15:29:27.794459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.539 [2024-11-26 15:29:27.819500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.539 [2024-11-26 15:29:27.861076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.539 [2024-11-26 15:29:27.861210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.110 malloc1 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.110 [2024-11-26 15:29:28.459919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:30.110 [2024-11-26 15:29:28.460041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.110 [2024-11-26 15:29:28.460070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:30.110 [2024-11-26 15:29:28.460087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.110 [2024-11-26 15:29:28.462196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.110 [2024-11-26 15:29:28.462240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:30.110 pt1 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:30.110 15:29:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.110 malloc2 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.110 [2024-11-26 15:29:28.488508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:30.110 [2024-11-26 15:29:28.488602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.110 [2024-11-26 15:29:28.488660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:30.110 [2024-11-26 15:29:28.488688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.110 [2024-11-26 15:29:28.490652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.110 [2024-11-26 15:29:28.490717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:30.110 pt2 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:30.110 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.111 malloc3 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.111 [2024-11-26 15:29:28.520840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:30.111 [2024-11-26 15:29:28.520930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.111 [2024-11-26 15:29:28.520964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:13:30.111 [2024-11-26 15:29:28.520998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.111 [2024-11-26 15:29:28.522962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.111 [2024-11-26 15:29:28.523027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:30.111 pt3 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.111 [2024-11-26 15:29:28.532875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:30.111 [2024-11-26 15:29:28.534650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:30.111 [2024-11-26 15:29:28.534742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:30.111 [2024-11-26 15:29:28.534918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:30.111 [2024-11-26 15:29:28.534964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:30.111 [2024-11-26 15:29:28.535234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:30.111 [2024-11-26 15:29:28.535674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:30.111 [2024-11-26 15:29:28.535719] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:30.111 [2024-11-26 15:29:28.535867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.111 15:29:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.372 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.372 "name": "raid_bdev1", 00:13:30.372 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:30.372 "strip_size_kb": 64, 00:13:30.372 "state": "online", 00:13:30.372 "raid_level": "raid5f", 00:13:30.372 "superblock": true, 00:13:30.372 "num_base_bdevs": 3, 00:13:30.372 "num_base_bdevs_discovered": 3, 00:13:30.372 "num_base_bdevs_operational": 3, 00:13:30.372 "base_bdevs_list": [ 00:13:30.372 { 00:13:30.372 "name": "pt1", 00:13:30.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:30.372 "is_configured": true, 00:13:30.372 "data_offset": 2048, 00:13:30.372 "data_size": 63488 00:13:30.372 }, 00:13:30.372 { 00:13:30.372 "name": "pt2", 00:13:30.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.372 "is_configured": true, 00:13:30.372 "data_offset": 2048, 00:13:30.372 "data_size": 63488 00:13:30.372 }, 00:13:30.372 { 00:13:30.372 "name": "pt3", 00:13:30.372 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:30.372 "is_configured": true, 00:13:30.372 "data_offset": 2048, 00:13:30.372 "data_size": 63488 00:13:30.372 } 00:13:30.372 ] 00:13:30.372 }' 00:13:30.372 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.372 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.632 
15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.632 [2024-11-26 15:29:28.945483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.632 "name": "raid_bdev1", 00:13:30.632 "aliases": [ 00:13:30.632 "22f58cd9-8372-4083-8d0f-8a141d6951c8" 00:13:30.632 ], 00:13:30.632 "product_name": "Raid Volume", 00:13:30.632 "block_size": 512, 00:13:30.632 "num_blocks": 126976, 00:13:30.632 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:30.632 "assigned_rate_limits": { 00:13:30.632 "rw_ios_per_sec": 0, 00:13:30.632 "rw_mbytes_per_sec": 0, 00:13:30.632 "r_mbytes_per_sec": 0, 00:13:30.632 "w_mbytes_per_sec": 0 00:13:30.632 }, 00:13:30.632 "claimed": false, 00:13:30.632 "zoned": false, 00:13:30.632 "supported_io_types": { 00:13:30.632 "read": true, 00:13:30.632 "write": true, 00:13:30.632 "unmap": false, 00:13:30.632 "flush": false, 00:13:30.632 "reset": true, 00:13:30.632 "nvme_admin": false, 00:13:30.632 "nvme_io": false, 00:13:30.632 "nvme_io_md": false, 00:13:30.632 "write_zeroes": true, 00:13:30.632 "zcopy": false, 00:13:30.632 "get_zone_info": false, 00:13:30.632 "zone_management": false, 00:13:30.632 "zone_append": false, 00:13:30.632 "compare": false, 00:13:30.632 "compare_and_write": false, 00:13:30.632 "abort": false, 00:13:30.632 "seek_hole": 
false, 00:13:30.632 "seek_data": false, 00:13:30.632 "copy": false, 00:13:30.632 "nvme_iov_md": false 00:13:30.632 }, 00:13:30.632 "driver_specific": { 00:13:30.632 "raid": { 00:13:30.632 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:30.632 "strip_size_kb": 64, 00:13:30.632 "state": "online", 00:13:30.632 "raid_level": "raid5f", 00:13:30.632 "superblock": true, 00:13:30.632 "num_base_bdevs": 3, 00:13:30.632 "num_base_bdevs_discovered": 3, 00:13:30.632 "num_base_bdevs_operational": 3, 00:13:30.632 "base_bdevs_list": [ 00:13:30.632 { 00:13:30.632 "name": "pt1", 00:13:30.632 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:30.632 "is_configured": true, 00:13:30.632 "data_offset": 2048, 00:13:30.632 "data_size": 63488 00:13:30.632 }, 00:13:30.632 { 00:13:30.632 "name": "pt2", 00:13:30.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:30.632 "is_configured": true, 00:13:30.632 "data_offset": 2048, 00:13:30.632 "data_size": 63488 00:13:30.632 }, 00:13:30.632 { 00:13:30.632 "name": "pt3", 00:13:30.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:30.632 "is_configured": true, 00:13:30.632 "data_offset": 2048, 00:13:30.632 "data_size": 63488 00:13:30.632 } 00:13:30.632 ] 00:13:30.632 } 00:13:30.632 } 00:13:30.632 }' 00:13:30.632 15:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:30.632 pt2 00:13:30.632 pt3' 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.632 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.890 15:29:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.890 [2024-11-26 15:29:29.205553] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=22f58cd9-8372-4083-8d0f-8a141d6951c8 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 22f58cd9-8372-4083-8d0f-8a141d6951c8 ']' 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.890 [2024-11-26 15:29:29.253382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:13:30.890 [2024-11-26 15:29:29.253442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.890 [2024-11-26 15:29:29.253548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.890 [2024-11-26 15:29:29.253666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.890 [2024-11-26 15:29:29.253709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:30.890 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:30.891 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 
00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.150 [2024-11-26 15:29:29.401469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:31.150 [2024-11-26 15:29:29.403332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:31.150 [2024-11-26 15:29:29.403423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:31.150 [2024-11-26 15:29:29.403486] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:31.150 [2024-11-26 15:29:29.403600] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:31.150 [2024-11-26 15:29:29.403643] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:31.150 [2024-11-26 15:29:29.403658] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.150 [2024-11-26 15:29:29.403667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:13:31.150 request: 00:13:31.150 { 00:13:31.150 "name": "raid_bdev1", 00:13:31.150 "raid_level": "raid5f", 00:13:31.150 "base_bdevs": [ 00:13:31.150 "malloc1", 00:13:31.150 "malloc2", 00:13:31.150 "malloc3" 00:13:31.150 ], 00:13:31.150 "strip_size_kb": 64, 00:13:31.150 "superblock": false, 00:13:31.150 "method": "bdev_raid_create", 00:13:31.150 "req_id": 1 00:13:31.150 } 00:13:31.150 Got JSON-RPC error response 00:13:31.150 response: 00:13:31.150 { 00:13:31.150 "code": -17, 00:13:31.150 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:31.150 } 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:31.150 15:29:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.150 [2024-11-26 15:29:29.465446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:31.150 [2024-11-26 15:29:29.465541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.150 [2024-11-26 15:29:29.465574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:31.150 [2024-11-26 15:29:29.465599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.150 [2024-11-26 15:29:29.467640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.150 [2024-11-26 15:29:29.467702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:31.150 [2024-11-26 15:29:29.467801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:31.150 [2024-11-26 15:29:29.467861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:31.150 pt1 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.150 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.150 "name": "raid_bdev1", 00:13:31.150 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:31.150 "strip_size_kb": 64, 00:13:31.150 "state": "configuring", 00:13:31.150 "raid_level": "raid5f", 00:13:31.150 "superblock": true, 00:13:31.150 "num_base_bdevs": 3, 00:13:31.150 "num_base_bdevs_discovered": 1, 00:13:31.150 "num_base_bdevs_operational": 3, 00:13:31.150 "base_bdevs_list": [ 00:13:31.150 { 00:13:31.150 "name": "pt1", 00:13:31.150 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:31.150 "is_configured": true, 00:13:31.150 "data_offset": 2048, 00:13:31.150 "data_size": 63488 00:13:31.150 }, 00:13:31.150 { 
00:13:31.150 "name": null, 00:13:31.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.151 "is_configured": false, 00:13:31.151 "data_offset": 2048, 00:13:31.151 "data_size": 63488 00:13:31.151 }, 00:13:31.151 { 00:13:31.151 "name": null, 00:13:31.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:31.151 "is_configured": false, 00:13:31.151 "data_offset": 2048, 00:13:31.151 "data_size": 63488 00:13:31.151 } 00:13:31.151 ] 00:13:31.151 }' 00:13:31.151 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.151 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.719 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:31.719 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:31.719 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.719 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.719 [2024-11-26 15:29:29.901620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:31.719 [2024-11-26 15:29:29.901693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.719 [2024-11-26 15:29:29.901724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:31.719 [2024-11-26 15:29:29.901742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.719 [2024-11-26 15:29:29.902275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.719 [2024-11-26 15:29:29.902292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:31.719 [2024-11-26 15:29:29.902396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
00:13:31.719 [2024-11-26 15:29:29.902416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:31.719 pt2 00:13:31.719 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.719 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.720 [2024-11-26 15:29:29.913642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.720 "name": "raid_bdev1", 00:13:31.720 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:31.720 "strip_size_kb": 64, 00:13:31.720 "state": "configuring", 00:13:31.720 "raid_level": "raid5f", 00:13:31.720 "superblock": true, 00:13:31.720 "num_base_bdevs": 3, 00:13:31.720 "num_base_bdevs_discovered": 1, 00:13:31.720 "num_base_bdevs_operational": 3, 00:13:31.720 "base_bdevs_list": [ 00:13:31.720 { 00:13:31.720 "name": "pt1", 00:13:31.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:31.720 "is_configured": true, 00:13:31.720 "data_offset": 2048, 00:13:31.720 "data_size": 63488 00:13:31.720 }, 00:13:31.720 { 00:13:31.720 "name": null, 00:13:31.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.720 "is_configured": false, 00:13:31.720 "data_offset": 0, 00:13:31.720 "data_size": 63488 00:13:31.720 }, 00:13:31.720 { 00:13:31.720 "name": null, 00:13:31.720 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:31.720 "is_configured": false, 00:13:31.720 "data_offset": 2048, 00:13:31.720 "data_size": 63488 00:13:31.720 } 00:13:31.720 ] 00:13:31.720 }' 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.720 15:29:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:31.980 15:29:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.980 [2024-11-26 15:29:30.337729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:31.980 [2024-11-26 15:29:30.337847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.980 [2024-11-26 15:29:30.337888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:31.980 [2024-11-26 15:29:30.337923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.980 [2024-11-26 15:29:30.338329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.980 [2024-11-26 15:29:30.338389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:31.980 [2024-11-26 15:29:30.338488] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:31.980 [2024-11-26 15:29:30.338543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:31.980 pt2 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.980 [2024-11-26 15:29:30.349696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:31.980 [2024-11-26 15:29:30.349781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.980 [2024-11-26 15:29:30.349810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:31.980 [2024-11-26 15:29:30.349838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.980 [2024-11-26 15:29:30.350167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.980 [2024-11-26 15:29:30.350248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:31.980 [2024-11-26 15:29:30.350326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:31.980 [2024-11-26 15:29:30.350374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:31.980 [2024-11-26 15:29:30.350520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:31.980 [2024-11-26 15:29:30.350569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:31.980 [2024-11-26 15:29:30.350812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:31.980 [2024-11-26 15:29:30.351290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:31.980 [2024-11-26 15:29:30.351340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:31.980 [2024-11-26 15:29:30.351482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.980 pt3 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.980 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.981 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.981 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.981 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.981 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.981 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.981 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:31.981 "name": "raid_bdev1", 00:13:31.981 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:31.981 "strip_size_kb": 64, 00:13:31.981 "state": "online", 00:13:31.981 "raid_level": "raid5f", 00:13:31.981 "superblock": true, 00:13:31.981 "num_base_bdevs": 3, 00:13:31.981 "num_base_bdevs_discovered": 3, 00:13:31.981 "num_base_bdevs_operational": 3, 00:13:31.981 "base_bdevs_list": [ 00:13:31.981 { 00:13:31.981 "name": "pt1", 00:13:31.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:31.981 "is_configured": true, 00:13:31.981 "data_offset": 2048, 00:13:31.981 "data_size": 63488 00:13:31.981 }, 00:13:31.981 { 00:13:31.981 "name": "pt2", 00:13:31.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:31.981 "is_configured": true, 00:13:31.981 "data_offset": 2048, 00:13:31.981 "data_size": 63488 00:13:31.981 }, 00:13:31.981 { 00:13:31.981 "name": "pt3", 00:13:31.981 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:31.981 "is_configured": true, 00:13:31.981 "data_offset": 2048, 00:13:31.981 "data_size": 63488 00:13:31.981 } 00:13:31.981 ] 00:13:31.981 }' 00:13:31.981 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.981 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:32.551 15:29:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.551 [2024-11-26 15:29:30.801993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.551 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:32.551 "name": "raid_bdev1", 00:13:32.551 "aliases": [ 00:13:32.551 "22f58cd9-8372-4083-8d0f-8a141d6951c8" 00:13:32.551 ], 00:13:32.551 "product_name": "Raid Volume", 00:13:32.551 "block_size": 512, 00:13:32.552 "num_blocks": 126976, 00:13:32.552 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:32.552 "assigned_rate_limits": { 00:13:32.552 "rw_ios_per_sec": 0, 00:13:32.552 "rw_mbytes_per_sec": 0, 00:13:32.552 "r_mbytes_per_sec": 0, 00:13:32.552 "w_mbytes_per_sec": 0 00:13:32.552 }, 00:13:32.552 "claimed": false, 00:13:32.552 "zoned": false, 00:13:32.552 "supported_io_types": { 00:13:32.552 "read": true, 00:13:32.552 "write": true, 00:13:32.552 "unmap": false, 00:13:32.552 "flush": false, 00:13:32.552 "reset": true, 00:13:32.552 "nvme_admin": false, 00:13:32.552 "nvme_io": false, 00:13:32.552 "nvme_io_md": false, 00:13:32.552 "write_zeroes": true, 00:13:32.552 "zcopy": false, 00:13:32.552 "get_zone_info": false, 00:13:32.552 "zone_management": false, 00:13:32.552 "zone_append": false, 00:13:32.552 "compare": false, 00:13:32.552 "compare_and_write": false, 00:13:32.552 "abort": false, 00:13:32.552 "seek_hole": false, 00:13:32.552 "seek_data": false, 00:13:32.552 "copy": false, 00:13:32.552 "nvme_iov_md": false 00:13:32.552 }, 00:13:32.552 
"driver_specific": { 00:13:32.552 "raid": { 00:13:32.552 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:32.552 "strip_size_kb": 64, 00:13:32.552 "state": "online", 00:13:32.552 "raid_level": "raid5f", 00:13:32.552 "superblock": true, 00:13:32.552 "num_base_bdevs": 3, 00:13:32.552 "num_base_bdevs_discovered": 3, 00:13:32.552 "num_base_bdevs_operational": 3, 00:13:32.552 "base_bdevs_list": [ 00:13:32.552 { 00:13:32.552 "name": "pt1", 00:13:32.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:32.552 "is_configured": true, 00:13:32.552 "data_offset": 2048, 00:13:32.552 "data_size": 63488 00:13:32.552 }, 00:13:32.552 { 00:13:32.552 "name": "pt2", 00:13:32.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.552 "is_configured": true, 00:13:32.552 "data_offset": 2048, 00:13:32.552 "data_size": 63488 00:13:32.552 }, 00:13:32.552 { 00:13:32.552 "name": "pt3", 00:13:32.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:32.552 "is_configured": true, 00:13:32.552 "data_offset": 2048, 00:13:32.552 "data_size": 63488 00:13:32.552 } 00:13:32.552 ] 00:13:32.552 } 00:13:32.552 } 00:13:32.552 }' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:32.552 pt2 00:13:32.552 pt3' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.552 15:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.552 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.813 [2024-11-26 15:29:31.046069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 22f58cd9-8372-4083-8d0f-8a141d6951c8 '!=' 22f58cd9-8372-4083-8d0f-8a141d6951c8 ']' 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.813 [2024-11-26 15:29:31.069938] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.813 "name": "raid_bdev1", 
00:13:32.813 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:32.813 "strip_size_kb": 64, 00:13:32.813 "state": "online", 00:13:32.813 "raid_level": "raid5f", 00:13:32.813 "superblock": true, 00:13:32.813 "num_base_bdevs": 3, 00:13:32.813 "num_base_bdevs_discovered": 2, 00:13:32.813 "num_base_bdevs_operational": 2, 00:13:32.813 "base_bdevs_list": [ 00:13:32.813 { 00:13:32.813 "name": null, 00:13:32.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.813 "is_configured": false, 00:13:32.813 "data_offset": 0, 00:13:32.813 "data_size": 63488 00:13:32.813 }, 00:13:32.813 { 00:13:32.813 "name": "pt2", 00:13:32.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.813 "is_configured": true, 00:13:32.813 "data_offset": 2048, 00:13:32.813 "data_size": 63488 00:13:32.813 }, 00:13:32.813 { 00:13:32.813 "name": "pt3", 00:13:32.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:32.813 "is_configured": true, 00:13:32.813 "data_offset": 2048, 00:13:32.813 "data_size": 63488 00:13:32.813 } 00:13:32.813 ] 00:13:32.813 }' 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.813 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.073 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.073 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.073 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.073 [2024-11-26 15:29:31.546068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.073 [2024-11-26 15:29:31.546138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.073 [2024-11-26 15:29:31.546267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.073 [2024-11-26 15:29:31.546339] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.073 [2024-11-26 15:29:31.546391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.333 [2024-11-26 15:29:31.634087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:33.333 [2024-11-26 15:29:31.634139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.333 [2024-11-26 15:29:31.634155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:33.333 [2024-11-26 15:29:31.634164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.333 [2024-11-26 15:29:31.636351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.333 [2024-11-26 15:29:31.636430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:33.333 [2024-11-26 15:29:31.636503] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:13:33.333 [2024-11-26 15:29:31.636538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:33.333 pt2 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.333 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.334 15:29:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.334 "name": "raid_bdev1", 00:13:33.334 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:33.334 "strip_size_kb": 64, 00:13:33.334 "state": "configuring", 00:13:33.334 "raid_level": "raid5f", 00:13:33.334 "superblock": true, 00:13:33.334 "num_base_bdevs": 3, 00:13:33.334 "num_base_bdevs_discovered": 1, 00:13:33.334 "num_base_bdevs_operational": 2, 00:13:33.334 "base_bdevs_list": [ 00:13:33.334 { 00:13:33.334 "name": null, 00:13:33.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.334 "is_configured": false, 00:13:33.334 "data_offset": 2048, 00:13:33.334 "data_size": 63488 00:13:33.334 }, 00:13:33.334 { 00:13:33.334 "name": "pt2", 00:13:33.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.334 "is_configured": true, 00:13:33.334 "data_offset": 2048, 00:13:33.334 "data_size": 63488 00:13:33.334 }, 00:13:33.334 { 00:13:33.334 "name": null, 00:13:33.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.334 "is_configured": false, 00:13:33.334 "data_offset": 2048, 00:13:33.334 "data_size": 63488 00:13:33.334 } 00:13:33.334 ] 00:13:33.334 }' 00:13:33.334 15:29:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.334 15:29:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.594 15:29:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.594 [2024-11-26 15:29:32.054208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:33.594 [2024-11-26 15:29:32.054309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.594 [2024-11-26 15:29:32.054341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:33.594 [2024-11-26 15:29:32.054372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.594 [2024-11-26 15:29:32.054735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.594 [2024-11-26 15:29:32.054795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:33.594 [2024-11-26 15:29:32.054888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:33.594 [2024-11-26 15:29:32.054946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:33.594 [2024-11-26 15:29:32.055064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:33.594 [2024-11-26 15:29:32.055101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:33.594 [2024-11-26 15:29:32.055345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:33.594 [2024-11-26 15:29:32.055798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:33.594 [2024-11-26 15:29:32.055842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:33.594 [2024-11-26 15:29:32.056078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.594 pt3 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.594 15:29:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.594 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.854 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.854 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.854 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.854 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.854 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.854 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.854 "name": "raid_bdev1", 00:13:33.854 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:33.854 "strip_size_kb": 64, 00:13:33.854 "state": "online", 00:13:33.854 "raid_level": "raid5f", 00:13:33.854 "superblock": true, 
00:13:33.854 "num_base_bdevs": 3, 00:13:33.854 "num_base_bdevs_discovered": 2, 00:13:33.854 "num_base_bdevs_operational": 2, 00:13:33.854 "base_bdevs_list": [ 00:13:33.854 { 00:13:33.854 "name": null, 00:13:33.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.854 "is_configured": false, 00:13:33.854 "data_offset": 2048, 00:13:33.854 "data_size": 63488 00:13:33.854 }, 00:13:33.854 { 00:13:33.854 "name": "pt2", 00:13:33.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.854 "is_configured": true, 00:13:33.854 "data_offset": 2048, 00:13:33.854 "data_size": 63488 00:13:33.854 }, 00:13:33.854 { 00:13:33.854 "name": "pt3", 00:13:33.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.854 "is_configured": true, 00:13:33.854 "data_offset": 2048, 00:13:33.854 "data_size": 63488 00:13:33.854 } 00:13:33.854 ] 00:13:33.854 }' 00:13:33.854 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.854 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.114 [2024-11-26 15:29:32.466332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.114 [2024-11-26 15:29:32.466413] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.114 [2024-11-26 15:29:32.466508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.114 [2024-11-26 15:29:32.466570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.114 [2024-11-26 15:29:32.466579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.114 [2024-11-26 15:29:32.542327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc1 00:13:34.114 [2024-11-26 15:29:32.542376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.114 [2024-11-26 15:29:32.542410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:34.114 [2024-11-26 15:29:32.542418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.114 [2024-11-26 15:29:32.544574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.114 [2024-11-26 15:29:32.544684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:34.114 [2024-11-26 15:29:32.544764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:34.114 [2024-11-26 15:29:32.544804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:34.114 [2024-11-26 15:29:32.544928] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:34.114 [2024-11-26 15:29:32.544939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.114 [2024-11-26 15:29:32.544974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:13:34.114 [2024-11-26 15:29:32.545011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:34.114 pt1 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.114 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.115 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.375 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.375 "name": "raid_bdev1", 00:13:34.375 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:34.375 "strip_size_kb": 64, 00:13:34.375 "state": "configuring", 00:13:34.375 "raid_level": "raid5f", 00:13:34.375 "superblock": true, 00:13:34.375 "num_base_bdevs": 3, 00:13:34.375 "num_base_bdevs_discovered": 1, 00:13:34.375 "num_base_bdevs_operational": 2, 00:13:34.375 "base_bdevs_list": [ 00:13:34.375 { 00:13:34.375 "name": null, 00:13:34.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.375 "is_configured": false, 
00:13:34.375 "data_offset": 2048, 00:13:34.375 "data_size": 63488 00:13:34.375 }, 00:13:34.375 { 00:13:34.375 "name": "pt2", 00:13:34.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.375 "is_configured": true, 00:13:34.375 "data_offset": 2048, 00:13:34.375 "data_size": 63488 00:13:34.375 }, 00:13:34.375 { 00:13:34.375 "name": null, 00:13:34.375 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.375 "is_configured": false, 00:13:34.375 "data_offset": 2048, 00:13:34.375 "data_size": 63488 00:13:34.375 } 00:13:34.375 ] 00:13:34.375 }' 00:13:34.375 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.375 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.635 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:34.635 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:34.635 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.635 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.635 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.635 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:34.635 15:29:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:34.635 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.635 15:29:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.635 [2024-11-26 15:29:33.006493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:34.635 [2024-11-26 15:29:33.006588] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.635 [2024-11-26 15:29:33.006625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:34.635 [2024-11-26 15:29:33.006653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.635 [2024-11-26 15:29:33.007061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.635 [2024-11-26 15:29:33.007118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:34.636 [2024-11-26 15:29:33.007223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:34.636 [2024-11-26 15:29:33.007273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:34.636 [2024-11-26 15:29:33.007385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:34.636 [2024-11-26 15:29:33.007420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:34.636 [2024-11-26 15:29:33.007674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:13:34.636 [2024-11-26 15:29:33.008156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:34.636 [2024-11-26 15:29:33.008229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:34.636 [2024-11-26 15:29:33.008431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.636 pt3 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.636 15:29:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.636 "name": "raid_bdev1", 00:13:34.636 "uuid": "22f58cd9-8372-4083-8d0f-8a141d6951c8", 00:13:34.636 "strip_size_kb": 64, 00:13:34.636 "state": "online", 00:13:34.636 "raid_level": "raid5f", 00:13:34.636 "superblock": true, 00:13:34.636 "num_base_bdevs": 3, 00:13:34.636 "num_base_bdevs_discovered": 2, 00:13:34.636 "num_base_bdevs_operational": 2, 00:13:34.636 "base_bdevs_list": [ 00:13:34.636 { 00:13:34.636 "name": null, 00:13:34.636 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:34.636 "is_configured": false, 00:13:34.636 "data_offset": 2048, 00:13:34.636 "data_size": 63488 00:13:34.636 }, 00:13:34.636 { 00:13:34.636 "name": "pt2", 00:13:34.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.636 "is_configured": true, 00:13:34.636 "data_offset": 2048, 00:13:34.636 "data_size": 63488 00:13:34.636 }, 00:13:34.636 { 00:13:34.636 "name": "pt3", 00:13:34.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.636 "is_configured": true, 00:13:34.636 "data_offset": 2048, 00:13:34.636 "data_size": 63488 00:13:34.636 } 00:13:34.636 ] 00:13:34.636 }' 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.636 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.206 [2024-11-26 15:29:33.502838] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 22f58cd9-8372-4083-8d0f-8a141d6951c8 '!=' 22f58cd9-8372-4083-8d0f-8a141d6951c8 ']' 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 93170 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 93170 ']' 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 93170 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93170 00:13:35.206 killing process with pid 93170 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93170' 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 93170 00:13:35.206 [2024-11-26 15:29:33.560667] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.206 [2024-11-26 15:29:33.560800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.206 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 93170 00:13:35.206 [2024-11-26 15:29:33.560888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:13:35.206 [2024-11-26 15:29:33.560906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:35.206 [2024-11-26 15:29:33.593536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:35.466 15:29:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:35.466 ************************************ 00:13:35.466 END TEST raid5f_superblock_test 00:13:35.466 ************************************ 00:13:35.466 00:13:35.466 real 0m6.280s 00:13:35.466 user 0m10.479s 00:13:35.466 sys 0m1.354s 00:13:35.466 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.466 15:29:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.466 15:29:33 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:35.466 15:29:33 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:35.466 15:29:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:35.466 15:29:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.466 15:29:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:35.466 ************************************ 00:13:35.466 START TEST raid5f_rebuild_test 00:13:35.466 ************************************ 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:35.466 15:29:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:35.466 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # 
'[' raid5f '!=' raid1 ']' 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=93597 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 93597 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 93597 ']' 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.467 15:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.727 [2024-11-26 15:29:33.986349] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:13:35.727 [2024-11-26 15:29:33.986875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93597 ] 00:13:35.727 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:35.727 Zero copy mechanism will not be used. 00:13:35.727 [2024-11-26 15:29:34.121605] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:35.727 [2024-11-26 15:29:34.158680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.727 [2024-11-26 15:29:34.183396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.987 [2024-11-26 15:29:34.226245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.987 [2024-11-26 15:29:34.226276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.557 BaseBdev1_malloc 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.557 [2024-11-26 15:29:34.821243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:36.557 [2024-11-26 15:29:34.821317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.557 [2024-11-26 15:29:34.821350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:36.557 [2024-11-26 15:29:34.821364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.557 [2024-11-26 15:29:34.823439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.557 [2024-11-26 15:29:34.823550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:36.557 BaseBdev1 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.557 BaseBdev2_malloc 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:36.557 [2024-11-26 15:29:34.849636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:36.557 [2024-11-26 15:29:34.849689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.557 [2024-11-26 15:29:34.849706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:36.557 [2024-11-26 15:29:34.849716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.557 [2024-11-26 15:29:34.851813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.557 [2024-11-26 15:29:34.851887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:36.557 BaseBdev2 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.557 BaseBdev3_malloc 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.557 [2024-11-26 15:29:34.878135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:36.557 [2024-11-26 15:29:34.878194] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.557 [2024-11-26 15:29:34.878213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:36.557 [2024-11-26 15:29:34.878223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.557 [2024-11-26 15:29:34.880205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.557 [2024-11-26 15:29:34.880242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:36.557 BaseBdev3 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.557 spare_malloc 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.557 spare_delay 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.557 
[2024-11-26 15:29:34.936382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:36.557 [2024-11-26 15:29:34.936519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.557 [2024-11-26 15:29:34.936551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:36.557 [2024-11-26 15:29:34.936567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.557 [2024-11-26 15:29:34.939597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.557 [2024-11-26 15:29:34.939679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:36.557 spare 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.557 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.558 [2024-11-26 15:29:34.948432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.558 [2024-11-26 15:29:34.950301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.558 [2024-11-26 15:29:34.950352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.558 [2024-11-26 15:29:34.950421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:36.558 [2024-11-26 15:29:34.950430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:36.558 [2024-11-26 15:29:34.950706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:36.558 [2024-11-26 
15:29:34.951116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:36.558 [2024-11-26 15:29:34.951129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:36.558 [2024-11-26 15:29:34.951249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.558 15:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.558 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.558 "name": "raid_bdev1", 00:13:36.558 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:36.558 "strip_size_kb": 64, 00:13:36.558 "state": "online", 00:13:36.558 "raid_level": "raid5f", 00:13:36.558 "superblock": false, 00:13:36.558 "num_base_bdevs": 3, 00:13:36.558 "num_base_bdevs_discovered": 3, 00:13:36.558 "num_base_bdevs_operational": 3, 00:13:36.558 "base_bdevs_list": [ 00:13:36.558 { 00:13:36.558 "name": "BaseBdev1", 00:13:36.558 "uuid": "6d6d194a-4776-53af-9aa3-13f3d11df2ca", 00:13:36.558 "is_configured": true, 00:13:36.558 "data_offset": 0, 00:13:36.558 "data_size": 65536 00:13:36.558 }, 00:13:36.558 { 00:13:36.558 "name": "BaseBdev2", 00:13:36.558 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:36.558 "is_configured": true, 00:13:36.558 "data_offset": 0, 00:13:36.558 "data_size": 65536 00:13:36.558 }, 00:13:36.558 { 00:13:36.558 "name": "BaseBdev3", 00:13:36.558 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:36.558 "is_configured": true, 00:13:36.558 "data_offset": 0, 00:13:36.558 "data_size": 65536 00:13:36.558 } 00:13:36.558 ] 00:13:36.558 }' 00:13:36.558 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.558 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.127 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.127 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.127 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:37.127 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.128 [2024-11-26 
15:29:35.384923] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # 
local i 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.128 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:37.387 [2024-11-26 15:29:35.644892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:13:37.387 /dev/nbd0 00:13:37.387 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.387 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.388 1+0 records in 00:13:37.388 1+0 records out 00:13:37.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677599 s, 6.0 MB/s 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:37.388 15:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:37.648 512+0 records in 00:13:37.648 512+0 records out 00:13:37.648 67108864 bytes (67 MB, 64 MiB) copied, 0.289405 s, 232 MB/s 00:13:37.648 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:37.648 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.648 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:37.648 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.648 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:37.648 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.648 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.908 [2024-11-26 15:29:36.226922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.908 [2024-11-26 15:29:36.243021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.908 "name": "raid_bdev1", 00:13:37.908 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:37.908 "strip_size_kb": 64, 00:13:37.908 "state": "online", 00:13:37.908 "raid_level": "raid5f", 00:13:37.908 "superblock": false, 00:13:37.908 "num_base_bdevs": 3, 00:13:37.908 "num_base_bdevs_discovered": 2, 00:13:37.908 "num_base_bdevs_operational": 2, 00:13:37.908 "base_bdevs_list": [ 00:13:37.908 { 00:13:37.908 "name": null, 00:13:37.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.908 "is_configured": false, 00:13:37.908 "data_offset": 0, 00:13:37.908 "data_size": 65536 00:13:37.908 }, 00:13:37.908 { 00:13:37.908 "name": "BaseBdev2", 00:13:37.908 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 
00:13:37.908 "is_configured": true, 00:13:37.908 "data_offset": 0, 00:13:37.908 "data_size": 65536 00:13:37.908 }, 00:13:37.908 { 00:13:37.908 "name": "BaseBdev3", 00:13:37.908 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:37.908 "is_configured": true, 00:13:37.908 "data_offset": 0, 00:13:37.908 "data_size": 65536 00:13:37.908 } 00:13:37.908 ] 00:13:37.908 }' 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.908 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.479 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.479 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.479 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.479 [2024-11-26 15:29:36.711145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.479 [2024-11-26 15:29:36.715705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ba90 00:13:38.479 15:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.479 15:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:38.479 [2024-11-26 15:29:36.717930] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.420 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.421 "name": "raid_bdev1", 00:13:39.421 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:39.421 "strip_size_kb": 64, 00:13:39.421 "state": "online", 00:13:39.421 "raid_level": "raid5f", 00:13:39.421 "superblock": false, 00:13:39.421 "num_base_bdevs": 3, 00:13:39.421 "num_base_bdevs_discovered": 3, 00:13:39.421 "num_base_bdevs_operational": 3, 00:13:39.421 "process": { 00:13:39.421 "type": "rebuild", 00:13:39.421 "target": "spare", 00:13:39.421 "progress": { 00:13:39.421 "blocks": 20480, 00:13:39.421 "percent": 15 00:13:39.421 } 00:13:39.421 }, 00:13:39.421 "base_bdevs_list": [ 00:13:39.421 { 00:13:39.421 "name": "spare", 00:13:39.421 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:39.421 "is_configured": true, 00:13:39.421 "data_offset": 0, 00:13:39.421 "data_size": 65536 00:13:39.421 }, 00:13:39.421 { 00:13:39.421 "name": "BaseBdev2", 00:13:39.421 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:39.421 "is_configured": true, 00:13:39.421 "data_offset": 0, 00:13:39.421 "data_size": 65536 00:13:39.421 }, 00:13:39.421 { 00:13:39.421 "name": "BaseBdev3", 00:13:39.421 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:39.421 "is_configured": true, 00:13:39.421 "data_offset": 0, 00:13:39.421 "data_size": 65536 00:13:39.421 } 00:13:39.421 ] 00:13:39.421 }' 
00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.421 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.421 [2024-11-26 15:29:37.864761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.681 [2024-11-26 15:29:37.926955] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:39.681 [2024-11-26 15:29:37.927013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.681 [2024-11-26 15:29:37.927031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.681 [2024-11-26 15:29:37.927038] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.681 15:29:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.681 "name": "raid_bdev1", 00:13:39.681 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:39.681 "strip_size_kb": 64, 00:13:39.681 "state": "online", 00:13:39.681 "raid_level": "raid5f", 00:13:39.681 "superblock": false, 00:13:39.681 "num_base_bdevs": 3, 00:13:39.681 "num_base_bdevs_discovered": 2, 00:13:39.681 "num_base_bdevs_operational": 2, 00:13:39.681 "base_bdevs_list": [ 00:13:39.681 { 00:13:39.681 "name": null, 00:13:39.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.681 "is_configured": false, 00:13:39.681 "data_offset": 0, 00:13:39.681 "data_size": 65536 00:13:39.681 }, 00:13:39.681 { 00:13:39.681 "name": "BaseBdev2", 00:13:39.681 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:39.681 
"is_configured": true, 00:13:39.681 "data_offset": 0, 00:13:39.681 "data_size": 65536 00:13:39.681 }, 00:13:39.681 { 00:13:39.681 "name": "BaseBdev3", 00:13:39.681 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:39.681 "is_configured": true, 00:13:39.681 "data_offset": 0, 00:13:39.681 "data_size": 65536 00:13:39.681 } 00:13:39.681 ] 00:13:39.681 }' 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.681 15:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.941 "name": "raid_bdev1", 00:13:39.941 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:39.941 "strip_size_kb": 64, 00:13:39.941 "state": "online", 00:13:39.941 "raid_level": "raid5f", 00:13:39.941 "superblock": false, 00:13:39.941 
"num_base_bdevs": 3, 00:13:39.941 "num_base_bdevs_discovered": 2, 00:13:39.941 "num_base_bdevs_operational": 2, 00:13:39.941 "base_bdevs_list": [ 00:13:39.941 { 00:13:39.941 "name": null, 00:13:39.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.941 "is_configured": false, 00:13:39.941 "data_offset": 0, 00:13:39.941 "data_size": 65536 00:13:39.941 }, 00:13:39.941 { 00:13:39.941 "name": "BaseBdev2", 00:13:39.941 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:39.941 "is_configured": true, 00:13:39.941 "data_offset": 0, 00:13:39.941 "data_size": 65536 00:13:39.941 }, 00:13:39.941 { 00:13:39.941 "name": "BaseBdev3", 00:13:39.941 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:39.941 "is_configured": true, 00:13:39.941 "data_offset": 0, 00:13:39.941 "data_size": 65536 00:13:39.941 } 00:13:39.941 ] 00:13:39.941 }' 00:13:39.941 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.201 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.201 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.201 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.201 15:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.201 15:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.201 15:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.201 [2024-11-26 15:29:38.473167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.201 [2024-11-26 15:29:38.477414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:13:40.201 15:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.201 15:29:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:40.201 [2024-11-26 15:29:38.479575] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.142 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.142 "name": "raid_bdev1", 00:13:41.142 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:41.142 "strip_size_kb": 64, 00:13:41.142 "state": "online", 00:13:41.142 "raid_level": "raid5f", 00:13:41.142 "superblock": false, 00:13:41.142 "num_base_bdevs": 3, 00:13:41.142 "num_base_bdevs_discovered": 3, 00:13:41.142 "num_base_bdevs_operational": 3, 00:13:41.142 "process": { 00:13:41.142 "type": "rebuild", 00:13:41.142 "target": "spare", 00:13:41.142 "progress": { 00:13:41.142 "blocks": 20480, 00:13:41.142 "percent": 15 00:13:41.142 } 00:13:41.142 }, 00:13:41.142 
"base_bdevs_list": [ 00:13:41.142 { 00:13:41.142 "name": "spare", 00:13:41.142 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:41.142 "is_configured": true, 00:13:41.142 "data_offset": 0, 00:13:41.142 "data_size": 65536 00:13:41.142 }, 00:13:41.143 { 00:13:41.143 "name": "BaseBdev2", 00:13:41.143 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:41.143 "is_configured": true, 00:13:41.143 "data_offset": 0, 00:13:41.143 "data_size": 65536 00:13:41.143 }, 00:13:41.143 { 00:13:41.143 "name": "BaseBdev3", 00:13:41.143 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:41.143 "is_configured": true, 00:13:41.143 "data_offset": 0, 00:13:41.143 "data_size": 65536 00:13:41.143 } 00:13:41.143 ] 00:13:41.143 }' 00:13:41.143 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.143 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.143 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=438 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.403 15:29:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.403 "name": "raid_bdev1", 00:13:41.403 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:41.403 "strip_size_kb": 64, 00:13:41.403 "state": "online", 00:13:41.403 "raid_level": "raid5f", 00:13:41.403 "superblock": false, 00:13:41.403 "num_base_bdevs": 3, 00:13:41.403 "num_base_bdevs_discovered": 3, 00:13:41.403 "num_base_bdevs_operational": 3, 00:13:41.403 "process": { 00:13:41.403 "type": "rebuild", 00:13:41.403 "target": "spare", 00:13:41.403 "progress": { 00:13:41.403 "blocks": 22528, 00:13:41.403 "percent": 17 00:13:41.403 } 00:13:41.403 }, 00:13:41.403 "base_bdevs_list": [ 00:13:41.403 { 00:13:41.403 "name": "spare", 00:13:41.403 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:41.403 "is_configured": true, 00:13:41.403 "data_offset": 0, 00:13:41.403 "data_size": 65536 00:13:41.403 }, 00:13:41.403 { 00:13:41.403 "name": "BaseBdev2", 00:13:41.403 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:41.403 "is_configured": true, 00:13:41.403 "data_offset": 0, 00:13:41.403 "data_size": 65536 00:13:41.403 }, 00:13:41.403 { 
00:13:41.403 "name": "BaseBdev3", 00:13:41.403 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:41.403 "is_configured": true, 00:13:41.403 "data_offset": 0, 00:13:41.403 "data_size": 65536 00:13:41.403 } 00:13:41.403 ] 00:13:41.403 }' 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.403 15:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.341 15:29:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:42.601 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.601 "name": "raid_bdev1", 00:13:42.601 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:42.601 "strip_size_kb": 64, 00:13:42.601 "state": "online", 00:13:42.601 "raid_level": "raid5f", 00:13:42.601 "superblock": false, 00:13:42.601 "num_base_bdevs": 3, 00:13:42.601 "num_base_bdevs_discovered": 3, 00:13:42.601 "num_base_bdevs_operational": 3, 00:13:42.601 "process": { 00:13:42.601 "type": "rebuild", 00:13:42.601 "target": "spare", 00:13:42.601 "progress": { 00:13:42.601 "blocks": 47104, 00:13:42.601 "percent": 35 00:13:42.601 } 00:13:42.601 }, 00:13:42.601 "base_bdevs_list": [ 00:13:42.601 { 00:13:42.601 "name": "spare", 00:13:42.601 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:42.601 "is_configured": true, 00:13:42.601 "data_offset": 0, 00:13:42.601 "data_size": 65536 00:13:42.601 }, 00:13:42.601 { 00:13:42.601 "name": "BaseBdev2", 00:13:42.601 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:42.601 "is_configured": true, 00:13:42.601 "data_offset": 0, 00:13:42.601 "data_size": 65536 00:13:42.601 }, 00:13:42.601 { 00:13:42.601 "name": "BaseBdev3", 00:13:42.601 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:42.601 "is_configured": true, 00:13:42.601 "data_offset": 0, 00:13:42.601 "data_size": 65536 00:13:42.601 } 00:13:42.601 ] 00:13:42.601 }' 00:13:42.601 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.601 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.601 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.601 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.601 15:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.539 15:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.539 "name": "raid_bdev1", 00:13:43.539 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:43.539 "strip_size_kb": 64, 00:13:43.539 "state": "online", 00:13:43.539 "raid_level": "raid5f", 00:13:43.539 "superblock": false, 00:13:43.539 "num_base_bdevs": 3, 00:13:43.539 "num_base_bdevs_discovered": 3, 00:13:43.539 "num_base_bdevs_operational": 3, 00:13:43.539 "process": { 00:13:43.539 "type": "rebuild", 00:13:43.539 "target": "spare", 00:13:43.539 "progress": { 00:13:43.539 "blocks": 69632, 00:13:43.539 "percent": 53 00:13:43.539 } 00:13:43.539 }, 00:13:43.539 "base_bdevs_list": [ 00:13:43.539 { 00:13:43.539 "name": "spare", 00:13:43.539 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:43.539 "is_configured": true, 
00:13:43.539 "data_offset": 0, 00:13:43.539 "data_size": 65536 00:13:43.539 }, 00:13:43.539 { 00:13:43.539 "name": "BaseBdev2", 00:13:43.539 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:43.539 "is_configured": true, 00:13:43.539 "data_offset": 0, 00:13:43.540 "data_size": 65536 00:13:43.540 }, 00:13:43.540 { 00:13:43.540 "name": "BaseBdev3", 00:13:43.540 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:43.540 "is_configured": true, 00:13:43.540 "data_offset": 0, 00:13:43.540 "data_size": 65536 00:13:43.540 } 00:13:43.540 ] 00:13:43.540 }' 00:13:43.540 15:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.799 15:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.799 15:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.799 15:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.799 15:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.738 "name": "raid_bdev1", 00:13:44.738 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:44.738 "strip_size_kb": 64, 00:13:44.738 "state": "online", 00:13:44.738 "raid_level": "raid5f", 00:13:44.738 "superblock": false, 00:13:44.738 "num_base_bdevs": 3, 00:13:44.738 "num_base_bdevs_discovered": 3, 00:13:44.738 "num_base_bdevs_operational": 3, 00:13:44.738 "process": { 00:13:44.738 "type": "rebuild", 00:13:44.738 "target": "spare", 00:13:44.738 "progress": { 00:13:44.738 "blocks": 92160, 00:13:44.738 "percent": 70 00:13:44.738 } 00:13:44.738 }, 00:13:44.738 "base_bdevs_list": [ 00:13:44.738 { 00:13:44.738 "name": "spare", 00:13:44.738 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:44.738 "is_configured": true, 00:13:44.738 "data_offset": 0, 00:13:44.738 "data_size": 65536 00:13:44.738 }, 00:13:44.738 { 00:13:44.738 "name": "BaseBdev2", 00:13:44.738 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:44.738 "is_configured": true, 00:13:44.738 "data_offset": 0, 00:13:44.738 "data_size": 65536 00:13:44.738 }, 00:13:44.738 { 00:13:44.738 "name": "BaseBdev3", 00:13:44.738 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:44.738 "is_configured": true, 00:13:44.738 "data_offset": 0, 00:13:44.738 "data_size": 65536 00:13:44.738 } 00:13:44.738 ] 00:13:44.738 }' 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.738 15:29:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.998 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.998 15:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.937 "name": "raid_bdev1", 00:13:45.937 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:45.937 "strip_size_kb": 64, 00:13:45.937 "state": "online", 00:13:45.937 "raid_level": "raid5f", 00:13:45.937 "superblock": false, 00:13:45.937 "num_base_bdevs": 3, 00:13:45.937 "num_base_bdevs_discovered": 3, 00:13:45.937 "num_base_bdevs_operational": 3, 00:13:45.937 "process": { 00:13:45.937 "type": "rebuild", 00:13:45.937 
"target": "spare", 00:13:45.937 "progress": { 00:13:45.937 "blocks": 116736, 00:13:45.937 "percent": 89 00:13:45.937 } 00:13:45.937 }, 00:13:45.937 "base_bdevs_list": [ 00:13:45.937 { 00:13:45.937 "name": "spare", 00:13:45.937 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:45.937 "is_configured": true, 00:13:45.937 "data_offset": 0, 00:13:45.937 "data_size": 65536 00:13:45.937 }, 00:13:45.937 { 00:13:45.937 "name": "BaseBdev2", 00:13:45.937 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:45.937 "is_configured": true, 00:13:45.937 "data_offset": 0, 00:13:45.937 "data_size": 65536 00:13:45.937 }, 00:13:45.937 { 00:13:45.937 "name": "BaseBdev3", 00:13:45.937 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:45.937 "is_configured": true, 00:13:45.937 "data_offset": 0, 00:13:45.937 "data_size": 65536 00:13:45.937 } 00:13:45.937 ] 00:13:45.937 }' 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.937 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.938 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.938 15:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.507 [2024-11-26 15:29:44.924170] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:46.507 [2024-11-26 15:29:44.924293] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:46.507 [2024-11-26 15:29:44.924351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.077 "name": "raid_bdev1", 00:13:47.077 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:47.077 "strip_size_kb": 64, 00:13:47.077 "state": "online", 00:13:47.077 "raid_level": "raid5f", 00:13:47.077 "superblock": false, 00:13:47.077 "num_base_bdevs": 3, 00:13:47.077 "num_base_bdevs_discovered": 3, 00:13:47.077 "num_base_bdevs_operational": 3, 00:13:47.077 "base_bdevs_list": [ 00:13:47.077 { 00:13:47.077 "name": "spare", 00:13:47.077 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:47.077 "is_configured": true, 00:13:47.077 "data_offset": 0, 00:13:47.077 "data_size": 65536 00:13:47.077 }, 00:13:47.077 { 00:13:47.077 "name": "BaseBdev2", 00:13:47.077 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:47.077 "is_configured": true, 00:13:47.077 "data_offset": 0, 00:13:47.077 "data_size": 65536 00:13:47.077 }, 00:13:47.077 { 00:13:47.077 "name": 
"BaseBdev3", 00:13:47.077 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:47.077 "is_configured": true, 00:13:47.077 "data_offset": 0, 00:13:47.077 "data_size": 65536 00:13:47.077 } 00:13:47.077 ] 00:13:47.077 }' 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.077 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.337 "name": "raid_bdev1", 00:13:47.337 
"uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:47.337 "strip_size_kb": 64, 00:13:47.337 "state": "online", 00:13:47.337 "raid_level": "raid5f", 00:13:47.337 "superblock": false, 00:13:47.337 "num_base_bdevs": 3, 00:13:47.337 "num_base_bdevs_discovered": 3, 00:13:47.337 "num_base_bdevs_operational": 3, 00:13:47.337 "base_bdevs_list": [ 00:13:47.337 { 00:13:47.337 "name": "spare", 00:13:47.337 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:47.337 "is_configured": true, 00:13:47.337 "data_offset": 0, 00:13:47.337 "data_size": 65536 00:13:47.337 }, 00:13:47.337 { 00:13:47.337 "name": "BaseBdev2", 00:13:47.337 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:47.337 "is_configured": true, 00:13:47.337 "data_offset": 0, 00:13:47.337 "data_size": 65536 00:13:47.337 }, 00:13:47.337 { 00:13:47.337 "name": "BaseBdev3", 00:13:47.337 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:47.337 "is_configured": true, 00:13:47.337 "data_offset": 0, 00:13:47.337 "data_size": 65536 00:13:47.337 } 00:13:47.337 ] 00:13:47.337 }' 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.337 15:29:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.337 "name": "raid_bdev1", 00:13:47.337 "uuid": "0ba01d33-de3d-4651-b298-1924363c4fb8", 00:13:47.337 "strip_size_kb": 64, 00:13:47.337 "state": "online", 00:13:47.337 "raid_level": "raid5f", 00:13:47.337 "superblock": false, 00:13:47.337 "num_base_bdevs": 3, 00:13:47.337 "num_base_bdevs_discovered": 3, 00:13:47.337 "num_base_bdevs_operational": 3, 00:13:47.337 "base_bdevs_list": [ 00:13:47.337 { 00:13:47.337 "name": "spare", 00:13:47.337 "uuid": "ff800fa1-9a0b-58c5-b150-7b2d86843d38", 00:13:47.337 "is_configured": true, 00:13:47.337 "data_offset": 0, 00:13:47.337 "data_size": 65536 00:13:47.337 }, 00:13:47.337 { 00:13:47.337 "name": "BaseBdev2", 00:13:47.337 "uuid": "1e5cad42-4025-5940-9853-35b09f4b17c5", 00:13:47.337 
"is_configured": true, 00:13:47.337 "data_offset": 0, 00:13:47.337 "data_size": 65536 00:13:47.337 }, 00:13:47.337 { 00:13:47.337 "name": "BaseBdev3", 00:13:47.337 "uuid": "fb37f5ab-f8f7-51ef-937a-952926b6c1e4", 00:13:47.337 "is_configured": true, 00:13:47.337 "data_offset": 0, 00:13:47.337 "data_size": 65536 00:13:47.337 } 00:13:47.337 ] 00:13:47.337 }' 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.337 15:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.907 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:47.907 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.907 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.907 [2024-11-26 15:29:46.122042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.907 [2024-11-26 15:29:46.122117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.907 [2024-11-26 15:29:46.122261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.907 [2024-11-26 15:29:46.122368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.907 [2024-11-26 15:29:46.122415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:47.907 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.907 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.907 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.907 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:47.907 15:29:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.907 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:47.908 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:47.908 /dev/nbd0 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:48.167 15:29:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.167 1+0 records in 00:13:48.167 1+0 records out 00:13:48.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430671 s, 9.5 MB/s 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 
00:13:48.167 /dev/nbd1 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.167 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.427 1+0 records in 00:13:48.427 1+0 records out 00:13:48.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404541 s, 10.1 MB/s 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:48.427 15:29:46 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:48.427 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:48.687 15:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 93597 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 93597 ']' 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 93597 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.687 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93597 00:13:48.947 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.947 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.947 killing process with pid 93597 
00:13:48.947 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93597' 00:13:48.947 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 93597 00:13:48.947 Received shutdown signal, test time was about 60.000000 seconds 00:13:48.947 00:13:48.947 Latency(us) 00:13:48.947 [2024-11-26T15:29:47.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.947 [2024-11-26T15:29:47.426Z] =================================================================================================================== 00:13:48.947 [2024-11-26T15:29:47.426Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:48.947 [2024-11-26 15:29:47.180717] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.947 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 93597 00:13:48.947 [2024-11-26 15:29:47.220149] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.947 ************************************ 00:13:48.947 END TEST raid5f_rebuild_test 00:13:48.947 ************************************ 00:13:48.947 15:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:48.947 00:13:48.947 real 0m13.527s 00:13:48.947 user 0m16.999s 00:13:48.947 sys 0m1.856s 00:13:48.947 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.947 15:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.207 15:29:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:49.207 15:29:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:49.207 15:29:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.207 15:29:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.207 ************************************ 
00:13:49.207 START TEST raid5f_rebuild_test_sb 00:13:49.207 ************************************ 00:13:49.207 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:13:49.207 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:49.207 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=94015 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 94015 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 94015 ']' 00:13:49.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.208 15:29:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.208 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:49.208 Zero copy mechanism will not be used. 00:13:49.208 [2024-11-26 15:29:47.586755] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:13:49.208 [2024-11-26 15:29:47.586895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94015 ] 00:13:49.467 [2024-11-26 15:29:47.720786] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:49.467 [2024-11-26 15:29:47.758965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.467 [2024-11-26 15:29:47.785418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.467 [2024-11-26 15:29:47.828021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.467 [2024-11-26 15:29:47.828055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.037 BaseBdev1_malloc 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.037 [2024-11-26 15:29:48.427361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:50.037 [2024-11-26 15:29:48.427426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.037 [2024-11-26 15:29:48.427450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:50.037 
[2024-11-26 15:29:48.427463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.037 [2024-11-26 15:29:48.429556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.037 [2024-11-26 15:29:48.429659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.037 BaseBdev1 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.037 BaseBdev2_malloc 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:50.037 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.038 [2024-11-26 15:29:48.455784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:50.038 [2024-11-26 15:29:48.455836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.038 [2024-11-26 15:29:48.455852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:50.038 [2024-11-26 15:29:48.455862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.038 [2024-11-26 15:29:48.457873] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.038 [2024-11-26 15:29:48.457912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:50.038 BaseBdev2 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.038 BaseBdev3_malloc 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.038 [2024-11-26 15:29:48.484158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:50.038 [2024-11-26 15:29:48.484217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.038 [2024-11-26 15:29:48.484251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:50.038 [2024-11-26 15:29:48.484262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.038 [2024-11-26 15:29:48.486275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.038 [2024-11-26 15:29:48.486351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:50.038 BaseBdev3 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.038 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.299 spare_malloc 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.299 spare_delay 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.299 [2024-11-26 15:29:48.545008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:50.299 [2024-11-26 15:29:48.545070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.299 [2024-11-26 15:29:48.545092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:50.299 [2024-11-26 15:29:48.545105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.299 [2024-11-26 15:29:48.547594] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.299 [2024-11-26 15:29:48.547639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:50.299 spare 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.299 [2024-11-26 15:29:48.557052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.299 [2024-11-26 15:29:48.558784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.299 [2024-11-26 15:29:48.558913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.299 [2024-11-26 15:29:48.559068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:50.299 [2024-11-26 15:29:48.559084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:50.299 [2024-11-26 15:29:48.559366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:50.299 [2024-11-26 15:29:48.559736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:50.299 [2024-11-26 15:29:48.559750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:50.299 [2024-11-26 15:29:48.559847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.299 15:29:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.299 "name": "raid_bdev1", 00:13:50.299 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:50.299 "strip_size_kb": 64, 00:13:50.299 "state": "online", 00:13:50.299 "raid_level": "raid5f", 00:13:50.299 "superblock": true, 
00:13:50.299 "num_base_bdevs": 3, 00:13:50.299 "num_base_bdevs_discovered": 3, 00:13:50.299 "num_base_bdevs_operational": 3, 00:13:50.299 "base_bdevs_list": [ 00:13:50.299 { 00:13:50.299 "name": "BaseBdev1", 00:13:50.299 "uuid": "351d4678-e40f-5f5a-b596-b73f60cac035", 00:13:50.299 "is_configured": true, 00:13:50.299 "data_offset": 2048, 00:13:50.299 "data_size": 63488 00:13:50.299 }, 00:13:50.299 { 00:13:50.299 "name": "BaseBdev2", 00:13:50.299 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:50.299 "is_configured": true, 00:13:50.299 "data_offset": 2048, 00:13:50.299 "data_size": 63488 00:13:50.299 }, 00:13:50.299 { 00:13:50.299 "name": "BaseBdev3", 00:13:50.299 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:50.299 "is_configured": true, 00:13:50.299 "data_offset": 2048, 00:13:50.299 "data_size": 63488 00:13:50.299 } 00:13:50.299 ] 00:13:50.299 }' 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.299 15:29:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.869 [2024-11-26 15:29:49.049566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.869 15:29:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.869 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:13:50.869 [2024-11-26 15:29:49.333484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:13:51.129 /dev/nbd0 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.129 1+0 records in 00:13:51.129 1+0 records out 00:13:51.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594236 s, 6.9 MB/s 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:51.129 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:51.388 496+0 records in 00:13:51.388 496+0 records out 00:13:51.389 65011712 bytes (65 MB, 62 MiB) copied, 0.278972 s, 233 MB/s 00:13:51.389 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:51.389 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.389 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:51.389 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:51.389 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:51.389 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:51.389 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:51.648 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.648 [2024-11-26 15:29:49.911289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:51.648 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.649 [2024-11-26 15:29:49.933053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.649 15:29:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.649 "name": "raid_bdev1", 00:13:51.649 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:51.649 "strip_size_kb": 64, 00:13:51.649 "state": "online", 00:13:51.649 "raid_level": "raid5f", 00:13:51.649 "superblock": true, 00:13:51.649 "num_base_bdevs": 3, 00:13:51.649 "num_base_bdevs_discovered": 2, 00:13:51.649 "num_base_bdevs_operational": 2, 00:13:51.649 "base_bdevs_list": [ 00:13:51.649 { 00:13:51.649 "name": null, 00:13:51.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.649 "is_configured": false, 00:13:51.649 "data_offset": 0, 00:13:51.649 "data_size": 63488 00:13:51.649 }, 00:13:51.649 { 00:13:51.649 "name": "BaseBdev2", 00:13:51.649 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:51.649 "is_configured": true, 00:13:51.649 "data_offset": 2048, 00:13:51.649 "data_size": 63488 00:13:51.649 }, 00:13:51.649 { 00:13:51.649 "name": "BaseBdev3", 00:13:51.649 "uuid": 
"43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:51.649 "is_configured": true, 00:13:51.649 "data_offset": 2048, 00:13:51.649 "data_size": 63488 00:13:51.649 } 00:13:51.649 ] 00:13:51.649 }' 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.649 15:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.909 15:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.909 15:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.909 15:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.909 [2024-11-26 15:29:50.365198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.909 [2024-11-26 15:29:50.369739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029390 00:13:51.909 15:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.909 15:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:51.909 [2024-11-26 15:29:50.371896] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.291 "name": "raid_bdev1", 00:13:53.291 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:53.291 "strip_size_kb": 64, 00:13:53.291 "state": "online", 00:13:53.291 "raid_level": "raid5f", 00:13:53.291 "superblock": true, 00:13:53.291 "num_base_bdevs": 3, 00:13:53.291 "num_base_bdevs_discovered": 3, 00:13:53.291 "num_base_bdevs_operational": 3, 00:13:53.291 "process": { 00:13:53.291 "type": "rebuild", 00:13:53.291 "target": "spare", 00:13:53.291 "progress": { 00:13:53.291 "blocks": 20480, 00:13:53.291 "percent": 16 00:13:53.291 } 00:13:53.291 }, 00:13:53.291 "base_bdevs_list": [ 00:13:53.291 { 00:13:53.291 "name": "spare", 00:13:53.291 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:13:53.291 "is_configured": true, 00:13:53.291 "data_offset": 2048, 00:13:53.291 "data_size": 63488 00:13:53.291 }, 00:13:53.291 { 00:13:53.291 "name": "BaseBdev2", 00:13:53.291 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:53.291 "is_configured": true, 00:13:53.291 "data_offset": 2048, 00:13:53.291 "data_size": 63488 00:13:53.291 }, 00:13:53.291 { 00:13:53.291 "name": "BaseBdev3", 00:13:53.291 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:53.291 "is_configured": true, 00:13:53.291 "data_offset": 2048, 00:13:53.291 "data_size": 63488 00:13:53.291 } 00:13:53.291 ] 00:13:53.291 }' 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.291 15:29:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.291 [2024-11-26 15:29:51.506158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.291 [2024-11-26 15:29:51.580922] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:53.291 [2024-11-26 15:29:51.581038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.291 [2024-11-26 15:29:51.581057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:53.291 [2024-11-26 15:29:51.581065] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.291 15:29:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.291 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.292 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.292 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.292 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.292 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.292 "name": "raid_bdev1", 00:13:53.292 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:53.292 "strip_size_kb": 64, 00:13:53.292 "state": "online", 00:13:53.292 "raid_level": "raid5f", 00:13:53.292 "superblock": true, 00:13:53.292 "num_base_bdevs": 3, 00:13:53.292 "num_base_bdevs_discovered": 2, 00:13:53.292 "num_base_bdevs_operational": 2, 00:13:53.292 "base_bdevs_list": [ 00:13:53.292 { 00:13:53.292 "name": null, 00:13:53.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.292 "is_configured": false, 00:13:53.292 "data_offset": 0, 00:13:53.292 "data_size": 63488 00:13:53.292 }, 00:13:53.292 { 00:13:53.292 "name": "BaseBdev2", 00:13:53.292 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:53.292 "is_configured": true, 00:13:53.292 "data_offset": 2048, 00:13:53.292 "data_size": 
63488 00:13:53.292 }, 00:13:53.292 { 00:13:53.292 "name": "BaseBdev3", 00:13:53.292 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:53.292 "is_configured": true, 00:13:53.292 "data_offset": 2048, 00:13:53.292 "data_size": 63488 00:13:53.292 } 00:13:53.292 ] 00:13:53.292 }' 00:13:53.292 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.292 15:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.861 "name": "raid_bdev1", 00:13:53.861 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:53.861 "strip_size_kb": 64, 00:13:53.861 "state": "online", 00:13:53.861 "raid_level": "raid5f", 00:13:53.861 "superblock": true, 00:13:53.861 "num_base_bdevs": 3, 00:13:53.861 
"num_base_bdevs_discovered": 2, 00:13:53.861 "num_base_bdevs_operational": 2, 00:13:53.861 "base_bdevs_list": [ 00:13:53.861 { 00:13:53.861 "name": null, 00:13:53.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.861 "is_configured": false, 00:13:53.861 "data_offset": 0, 00:13:53.861 "data_size": 63488 00:13:53.861 }, 00:13:53.861 { 00:13:53.861 "name": "BaseBdev2", 00:13:53.861 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:53.861 "is_configured": true, 00:13:53.861 "data_offset": 2048, 00:13:53.861 "data_size": 63488 00:13:53.861 }, 00:13:53.861 { 00:13:53.861 "name": "BaseBdev3", 00:13:53.861 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:53.861 "is_configured": true, 00:13:53.861 "data_offset": 2048, 00:13:53.861 "data_size": 63488 00:13:53.861 } 00:13:53.861 ] 00:13:53.861 }' 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.861 [2024-11-26 15:29:52.203063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.861 [2024-11-26 15:29:52.207518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029460 00:13:53.861 15:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.861 15:29:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:53.861 [2024-11-26 15:29:52.209672] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.801 "name": "raid_bdev1", 00:13:54.801 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:54.801 "strip_size_kb": 64, 00:13:54.801 "state": "online", 00:13:54.801 "raid_level": "raid5f", 00:13:54.801 "superblock": true, 00:13:54.801 "num_base_bdevs": 3, 00:13:54.801 "num_base_bdevs_discovered": 3, 00:13:54.801 "num_base_bdevs_operational": 3, 00:13:54.801 "process": { 00:13:54.801 "type": "rebuild", 00:13:54.801 "target": "spare", 00:13:54.801 "progress": { 00:13:54.801 "blocks": 20480, 00:13:54.801 "percent": 16 00:13:54.801 } 
00:13:54.801 }, 00:13:54.801 "base_bdevs_list": [ 00:13:54.801 { 00:13:54.801 "name": "spare", 00:13:54.801 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:13:54.801 "is_configured": true, 00:13:54.801 "data_offset": 2048, 00:13:54.801 "data_size": 63488 00:13:54.801 }, 00:13:54.801 { 00:13:54.801 "name": "BaseBdev2", 00:13:54.801 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:54.801 "is_configured": true, 00:13:54.801 "data_offset": 2048, 00:13:54.801 "data_size": 63488 00:13:54.801 }, 00:13:54.801 { 00:13:54.801 "name": "BaseBdev3", 00:13:54.801 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:54.801 "is_configured": true, 00:13:54.801 "data_offset": 2048, 00:13:54.801 "data_size": 63488 00:13:54.801 } 00:13:54.801 ] 00:13:54.801 }' 00:13:54.801 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:55.061 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=452 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.061 15:29:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.061 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.061 "name": "raid_bdev1", 00:13:55.061 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:55.061 "strip_size_kb": 64, 00:13:55.061 "state": "online", 00:13:55.061 "raid_level": "raid5f", 00:13:55.061 "superblock": true, 00:13:55.061 "num_base_bdevs": 3, 00:13:55.061 "num_base_bdevs_discovered": 3, 00:13:55.061 "num_base_bdevs_operational": 3, 00:13:55.061 "process": { 00:13:55.061 "type": "rebuild", 00:13:55.061 "target": "spare", 00:13:55.061 "progress": { 00:13:55.061 "blocks": 22528, 00:13:55.061 "percent": 17 00:13:55.061 } 00:13:55.061 }, 00:13:55.061 "base_bdevs_list": [ 00:13:55.061 { 00:13:55.061 "name": "spare", 00:13:55.061 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:13:55.061 "is_configured": true, 00:13:55.061 "data_offset": 2048, 00:13:55.061 
"data_size": 63488 00:13:55.061 }, 00:13:55.061 { 00:13:55.062 "name": "BaseBdev2", 00:13:55.062 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:55.062 "is_configured": true, 00:13:55.062 "data_offset": 2048, 00:13:55.062 "data_size": 63488 00:13:55.062 }, 00:13:55.062 { 00:13:55.062 "name": "BaseBdev3", 00:13:55.062 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:55.062 "is_configured": true, 00:13:55.062 "data_offset": 2048, 00:13:55.062 "data_size": 63488 00:13:55.062 } 00:13:55.062 ] 00:13:55.062 }' 00:13:55.062 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.062 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.062 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.062 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.062 15:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.456 
15:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.456 "name": "raid_bdev1", 00:13:56.456 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:56.456 "strip_size_kb": 64, 00:13:56.456 "state": "online", 00:13:56.456 "raid_level": "raid5f", 00:13:56.456 "superblock": true, 00:13:56.456 "num_base_bdevs": 3, 00:13:56.456 "num_base_bdevs_discovered": 3, 00:13:56.456 "num_base_bdevs_operational": 3, 00:13:56.456 "process": { 00:13:56.456 "type": "rebuild", 00:13:56.456 "target": "spare", 00:13:56.456 "progress": { 00:13:56.456 "blocks": 45056, 00:13:56.456 "percent": 35 00:13:56.456 } 00:13:56.456 }, 00:13:56.456 "base_bdevs_list": [ 00:13:56.456 { 00:13:56.456 "name": "spare", 00:13:56.456 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:13:56.456 "is_configured": true, 00:13:56.456 "data_offset": 2048, 00:13:56.456 "data_size": 63488 00:13:56.456 }, 00:13:56.456 { 00:13:56.456 "name": "BaseBdev2", 00:13:56.456 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:56.456 "is_configured": true, 00:13:56.456 "data_offset": 2048, 00:13:56.456 "data_size": 63488 00:13:56.456 }, 00:13:56.456 { 00:13:56.456 "name": "BaseBdev3", 00:13:56.456 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:56.456 "is_configured": true, 00:13:56.456 "data_offset": 2048, 00:13:56.456 "data_size": 63488 00:13:56.456 } 00:13:56.456 ] 00:13:56.456 }' 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.456 15:29:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.456 15:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.457 "name": "raid_bdev1", 00:13:57.457 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:57.457 "strip_size_kb": 64, 00:13:57.457 "state": "online", 00:13:57.457 "raid_level": "raid5f", 00:13:57.457 "superblock": true, 00:13:57.457 "num_base_bdevs": 3, 00:13:57.457 "num_base_bdevs_discovered": 3, 00:13:57.457 "num_base_bdevs_operational": 
3, 00:13:57.457 "process": { 00:13:57.457 "type": "rebuild", 00:13:57.457 "target": "spare", 00:13:57.457 "progress": { 00:13:57.457 "blocks": 69632, 00:13:57.457 "percent": 54 00:13:57.457 } 00:13:57.457 }, 00:13:57.457 "base_bdevs_list": [ 00:13:57.457 { 00:13:57.457 "name": "spare", 00:13:57.457 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:13:57.457 "is_configured": true, 00:13:57.457 "data_offset": 2048, 00:13:57.457 "data_size": 63488 00:13:57.457 }, 00:13:57.457 { 00:13:57.457 "name": "BaseBdev2", 00:13:57.457 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:57.457 "is_configured": true, 00:13:57.457 "data_offset": 2048, 00:13:57.457 "data_size": 63488 00:13:57.457 }, 00:13:57.457 { 00:13:57.457 "name": "BaseBdev3", 00:13:57.457 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:57.457 "is_configured": true, 00:13:57.457 "data_offset": 2048, 00:13:57.457 "data_size": 63488 00:13:57.457 } 00:13:57.457 ] 00:13:57.457 }' 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.457 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.458 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.458 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.458 15:29:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.398 
15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.398 "name": "raid_bdev1", 00:13:58.398 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:58.398 "strip_size_kb": 64, 00:13:58.398 "state": "online", 00:13:58.398 "raid_level": "raid5f", 00:13:58.398 "superblock": true, 00:13:58.398 "num_base_bdevs": 3, 00:13:58.398 "num_base_bdevs_discovered": 3, 00:13:58.398 "num_base_bdevs_operational": 3, 00:13:58.398 "process": { 00:13:58.398 "type": "rebuild", 00:13:58.398 "target": "spare", 00:13:58.398 "progress": { 00:13:58.398 "blocks": 92160, 00:13:58.398 "percent": 72 00:13:58.398 } 00:13:58.398 }, 00:13:58.398 "base_bdevs_list": [ 00:13:58.398 { 00:13:58.398 "name": "spare", 00:13:58.398 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:13:58.398 "is_configured": true, 00:13:58.398 "data_offset": 2048, 00:13:58.398 "data_size": 63488 00:13:58.398 }, 00:13:58.398 { 00:13:58.398 "name": "BaseBdev2", 00:13:58.398 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:58.398 "is_configured": true, 00:13:58.398 "data_offset": 2048, 00:13:58.398 "data_size": 63488 00:13:58.398 }, 00:13:58.398 { 00:13:58.398 "name": "BaseBdev3", 00:13:58.398 "uuid": 
"43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:58.398 "is_configured": true, 00:13:58.398 "data_offset": 2048, 00:13:58.398 "data_size": 63488 00:13:58.398 } 00:13:58.398 ] 00:13:58.398 }' 00:13:58.398 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.658 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.658 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.658 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.658 15:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.597 
15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.597 "name": "raid_bdev1", 00:13:59.597 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:13:59.597 "strip_size_kb": 64, 00:13:59.597 "state": "online", 00:13:59.597 "raid_level": "raid5f", 00:13:59.597 "superblock": true, 00:13:59.597 "num_base_bdevs": 3, 00:13:59.597 "num_base_bdevs_discovered": 3, 00:13:59.597 "num_base_bdevs_operational": 3, 00:13:59.597 "process": { 00:13:59.597 "type": "rebuild", 00:13:59.597 "target": "spare", 00:13:59.597 "progress": { 00:13:59.597 "blocks": 114688, 00:13:59.597 "percent": 90 00:13:59.597 } 00:13:59.597 }, 00:13:59.597 "base_bdevs_list": [ 00:13:59.597 { 00:13:59.597 "name": "spare", 00:13:59.597 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:13:59.597 "is_configured": true, 00:13:59.597 "data_offset": 2048, 00:13:59.597 "data_size": 63488 00:13:59.597 }, 00:13:59.597 { 00:13:59.597 "name": "BaseBdev2", 00:13:59.597 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:13:59.597 "is_configured": true, 00:13:59.597 "data_offset": 2048, 00:13:59.597 "data_size": 63488 00:13:59.597 }, 00:13:59.597 { 00:13:59.597 "name": "BaseBdev3", 00:13:59.597 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:13:59.597 "is_configured": true, 00:13:59.597 "data_offset": 2048, 00:13:59.597 "data_size": 63488 00:13:59.597 } 00:13:59.597 ] 00:13:59.597 }' 00:13:59.597 15:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.597 15:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.597 15:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.597 15:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.597 15:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.167 [2024-11-26 15:29:58.453023] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:00.167 [2024-11-26 15:29:58.453147] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:00.167 [2024-11-26 15:29:58.453300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.737 "name": "raid_bdev1", 00:14:00.737 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:00.737 "strip_size_kb": 64, 00:14:00.737 "state": "online", 00:14:00.737 "raid_level": "raid5f", 00:14:00.737 "superblock": true, 00:14:00.737 "num_base_bdevs": 3, 00:14:00.737 "num_base_bdevs_discovered": 3, 
00:14:00.737 "num_base_bdevs_operational": 3, 00:14:00.737 "base_bdevs_list": [ 00:14:00.737 { 00:14:00.737 "name": "spare", 00:14:00.737 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:14:00.737 "is_configured": true, 00:14:00.737 "data_offset": 2048, 00:14:00.737 "data_size": 63488 00:14:00.737 }, 00:14:00.737 { 00:14:00.737 "name": "BaseBdev2", 00:14:00.737 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:00.737 "is_configured": true, 00:14:00.737 "data_offset": 2048, 00:14:00.737 "data_size": 63488 00:14:00.737 }, 00:14:00.737 { 00:14:00.737 "name": "BaseBdev3", 00:14:00.737 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:00.737 "is_configured": true, 00:14:00.737 "data_offset": 2048, 00:14:00.737 "data_size": 63488 00:14:00.737 } 00:14:00.737 ] 00:14:00.737 }' 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.737 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.997 "name": "raid_bdev1", 00:14:00.997 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:00.997 "strip_size_kb": 64, 00:14:00.997 "state": "online", 00:14:00.997 "raid_level": "raid5f", 00:14:00.997 "superblock": true, 00:14:00.997 "num_base_bdevs": 3, 00:14:00.997 "num_base_bdevs_discovered": 3, 00:14:00.997 "num_base_bdevs_operational": 3, 00:14:00.997 "base_bdevs_list": [ 00:14:00.997 { 00:14:00.997 "name": "spare", 00:14:00.997 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:14:00.997 "is_configured": true, 00:14:00.997 "data_offset": 2048, 00:14:00.997 "data_size": 63488 00:14:00.997 }, 00:14:00.997 { 00:14:00.997 "name": "BaseBdev2", 00:14:00.997 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:00.997 "is_configured": true, 00:14:00.997 "data_offset": 2048, 00:14:00.997 "data_size": 63488 00:14:00.997 }, 00:14:00.997 { 00:14:00.997 "name": "BaseBdev3", 00:14:00.997 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:00.997 "is_configured": true, 00:14:00.997 "data_offset": 2048, 00:14:00.997 "data_size": 63488 00:14:00.997 } 00:14:00.997 ] 00:14:00.997 }' 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.997 "name": "raid_bdev1", 00:14:00.997 "uuid": 
"4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:00.997 "strip_size_kb": 64, 00:14:00.997 "state": "online", 00:14:00.997 "raid_level": "raid5f", 00:14:00.997 "superblock": true, 00:14:00.997 "num_base_bdevs": 3, 00:14:00.997 "num_base_bdevs_discovered": 3, 00:14:00.997 "num_base_bdevs_operational": 3, 00:14:00.997 "base_bdevs_list": [ 00:14:00.997 { 00:14:00.997 "name": "spare", 00:14:00.997 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:14:00.997 "is_configured": true, 00:14:00.997 "data_offset": 2048, 00:14:00.997 "data_size": 63488 00:14:00.997 }, 00:14:00.997 { 00:14:00.997 "name": "BaseBdev2", 00:14:00.997 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:00.997 "is_configured": true, 00:14:00.997 "data_offset": 2048, 00:14:00.997 "data_size": 63488 00:14:00.997 }, 00:14:00.997 { 00:14:00.997 "name": "BaseBdev3", 00:14:00.997 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:00.997 "is_configured": true, 00:14:00.997 "data_offset": 2048, 00:14:00.997 "data_size": 63488 00:14:00.997 } 00:14:00.997 ] 00:14:00.997 }' 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.997 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.566 [2024-11-26 15:29:59.786992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.566 [2024-11-26 15:29:59.787063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.566 [2024-11-26 15:29:59.787175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.566 [2024-11-26 15:29:59.787299] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.566 [2024-11-26 15:29:59.787363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.566 15:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:01.566 /dev/nbd0 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.826 1+0 records in 00:14:01.826 1+0 records out 00:14:01.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290642 s, 14.1 MB/s 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.826 15:30:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.826 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:01.826 /dev/nbd1 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.827 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.086 1+0 records in 00:14:02.086 1+0 records out 00:14:02.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334979 s, 12.2 MB/s 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.086 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.347 [2024-11-26 15:30:00.789390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.347 [2024-11-26 15:30:00.789456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.347 [2024-11-26 15:30:00.789478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:02.347 [2024-11-26 15:30:00.789489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.347 [2024-11-26 15:30:00.791570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.347 [2024-11-26 15:30:00.791609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.347 [2024-11-26 15:30:00.791686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:02.347 [2024-11-26 15:30:00.791724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.347 [2024-11-26 15:30:00.791831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.347 [2024-11-26 15:30:00.791956] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.347 spare 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.347 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.607 [2024-11-26 15:30:00.892016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:02.607 [2024-11-26 15:30:00.892083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:02.607 [2024-11-26 15:30:00.892410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047b10 00:14:02.607 [2024-11-26 15:30:00.892877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:02.607 [2024-11-26 15:30:00.892927] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:02.607 [2024-11-26 15:30:00.893204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.607 "name": "raid_bdev1", 00:14:02.607 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:02.607 "strip_size_kb": 64, 00:14:02.607 "state": "online", 00:14:02.607 "raid_level": "raid5f", 00:14:02.607 "superblock": true, 00:14:02.607 "num_base_bdevs": 3, 00:14:02.607 "num_base_bdevs_discovered": 3, 00:14:02.607 "num_base_bdevs_operational": 3, 00:14:02.607 "base_bdevs_list": [ 00:14:02.607 { 00:14:02.607 "name": "spare", 00:14:02.607 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:14:02.607 "is_configured": true, 00:14:02.607 "data_offset": 2048, 00:14:02.607 "data_size": 63488 00:14:02.607 }, 00:14:02.607 { 00:14:02.607 "name": "BaseBdev2", 00:14:02.607 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:02.607 "is_configured": true, 00:14:02.607 "data_offset": 
2048, 00:14:02.607 "data_size": 63488 00:14:02.607 }, 00:14:02.607 { 00:14:02.607 "name": "BaseBdev3", 00:14:02.607 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:02.607 "is_configured": true, 00:14:02.607 "data_offset": 2048, 00:14:02.607 "data_size": 63488 00:14:02.607 } 00:14:02.607 ] 00:14:02.607 }' 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.607 15:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.177 "name": "raid_bdev1", 00:14:03.177 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:03.177 "strip_size_kb": 64, 00:14:03.177 "state": "online", 00:14:03.177 "raid_level": "raid5f", 00:14:03.177 "superblock": true, 00:14:03.177 
"num_base_bdevs": 3, 00:14:03.177 "num_base_bdevs_discovered": 3, 00:14:03.177 "num_base_bdevs_operational": 3, 00:14:03.177 "base_bdevs_list": [ 00:14:03.177 { 00:14:03.177 "name": "spare", 00:14:03.177 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:14:03.177 "is_configured": true, 00:14:03.177 "data_offset": 2048, 00:14:03.177 "data_size": 63488 00:14:03.177 }, 00:14:03.177 { 00:14:03.177 "name": "BaseBdev2", 00:14:03.177 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:03.177 "is_configured": true, 00:14:03.177 "data_offset": 2048, 00:14:03.177 "data_size": 63488 00:14:03.177 }, 00:14:03.177 { 00:14:03.177 "name": "BaseBdev3", 00:14:03.177 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:03.177 "is_configured": true, 00:14:03.177 "data_offset": 2048, 00:14:03.177 "data_size": 63488 00:14:03.177 } 00:14:03.177 ] 00:14:03.177 }' 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.177 15:30:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.177 [2024-11-26 15:30:01.562602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.177 "name": "raid_bdev1", 00:14:03.177 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:03.177 "strip_size_kb": 64, 00:14:03.177 "state": "online", 00:14:03.177 "raid_level": "raid5f", 00:14:03.177 "superblock": true, 00:14:03.177 "num_base_bdevs": 3, 00:14:03.177 "num_base_bdevs_discovered": 2, 00:14:03.177 "num_base_bdevs_operational": 2, 00:14:03.177 "base_bdevs_list": [ 00:14:03.177 { 00:14:03.177 "name": null, 00:14:03.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.177 "is_configured": false, 00:14:03.177 "data_offset": 0, 00:14:03.177 "data_size": 63488 00:14:03.177 }, 00:14:03.177 { 00:14:03.177 "name": "BaseBdev2", 00:14:03.177 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:03.177 "is_configured": true, 00:14:03.177 "data_offset": 2048, 00:14:03.177 "data_size": 63488 00:14:03.177 }, 00:14:03.177 { 00:14:03.177 "name": "BaseBdev3", 00:14:03.177 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:03.177 "is_configured": true, 00:14:03.177 "data_offset": 2048, 00:14:03.177 "data_size": 63488 00:14:03.177 } 00:14:03.177 ] 00:14:03.177 }' 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.177 15:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.747 15:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.747 15:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.747 15:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.747 [2024-11-26 15:30:02.014752] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.747 [2024-11-26 15:30:02.014969] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:03.747 [2024-11-26 15:30:02.014992] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:03.747 [2024-11-26 15:30:02.015026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.747 [2024-11-26 15:30:02.019396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047be0 00:14:03.747 15:30:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.747 15:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:03.747 [2024-11-26 15:30:02.021605] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.688 "name": "raid_bdev1", 00:14:04.688 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:04.688 "strip_size_kb": 64, 00:14:04.688 "state": "online", 00:14:04.688 "raid_level": "raid5f", 00:14:04.688 "superblock": true, 00:14:04.688 "num_base_bdevs": 3, 00:14:04.688 "num_base_bdevs_discovered": 3, 00:14:04.688 "num_base_bdevs_operational": 3, 00:14:04.688 "process": { 00:14:04.688 "type": "rebuild", 00:14:04.688 "target": "spare", 00:14:04.688 "progress": { 00:14:04.688 "blocks": 20480, 00:14:04.688 "percent": 16 00:14:04.688 } 00:14:04.688 }, 00:14:04.688 "base_bdevs_list": [ 00:14:04.688 { 00:14:04.688 "name": "spare", 00:14:04.688 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:14:04.688 "is_configured": true, 00:14:04.688 "data_offset": 2048, 00:14:04.688 "data_size": 63488 00:14:04.688 }, 00:14:04.688 { 00:14:04.688 "name": "BaseBdev2", 00:14:04.688 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:04.688 "is_configured": true, 00:14:04.688 "data_offset": 2048, 00:14:04.688 "data_size": 63488 00:14:04.688 }, 00:14:04.688 { 00:14:04.688 "name": "BaseBdev3", 00:14:04.688 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:04.688 "is_configured": true, 00:14:04.688 "data_offset": 2048, 00:14:04.688 "data_size": 63488 00:14:04.688 } 00:14:04.688 ] 00:14:04.688 }' 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.688 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.948 [2024-11-26 15:30:03.175776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.948 [2024-11-26 15:30:03.230453] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:04.948 [2024-11-26 15:30:03.230587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.948 [2024-11-26 15:30:03.230630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.948 [2024-11-26 15:30:03.230664] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.948 15:30:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.948 "name": "raid_bdev1", 00:14:04.948 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:04.948 "strip_size_kb": 64, 00:14:04.948 "state": "online", 00:14:04.948 "raid_level": "raid5f", 00:14:04.948 "superblock": true, 00:14:04.948 "num_base_bdevs": 3, 00:14:04.948 "num_base_bdevs_discovered": 2, 00:14:04.948 "num_base_bdevs_operational": 2, 00:14:04.948 "base_bdevs_list": [ 00:14:04.948 { 00:14:04.948 "name": null, 00:14:04.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.948 "is_configured": false, 00:14:04.948 "data_offset": 0, 00:14:04.948 "data_size": 63488 00:14:04.948 }, 00:14:04.948 { 00:14:04.948 "name": "BaseBdev2", 00:14:04.948 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:04.948 "is_configured": true, 00:14:04.948 "data_offset": 2048, 00:14:04.948 "data_size": 63488 00:14:04.948 }, 00:14:04.948 { 00:14:04.948 "name": "BaseBdev3", 00:14:04.948 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:04.948 "is_configured": true, 00:14:04.948 "data_offset": 2048, 00:14:04.948 "data_size": 63488 00:14:04.948 } 00:14:04.948 ] 00:14:04.948 }' 00:14:04.948 15:30:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.948 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.207 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:05.207 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.207 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.207 [2024-11-26 15:30:03.676332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.207 [2024-11-26 15:30:03.676442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.207 [2024-11-26 15:30:03.676481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:05.207 [2024-11-26 15:30:03.676511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.207 [2024-11-26 15:30:03.676984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.207 [2024-11-26 15:30:03.677050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.207 [2024-11-26 15:30:03.677157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:05.207 [2024-11-26 15:30:03.677216] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:05.207 [2024-11-26 15:30:03.677260] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:05.207 [2024-11-26 15:30:03.677350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.466 [2024-11-26 15:30:03.681749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047cb0 00:14:05.466 spare 00:14:05.466 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.466 15:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:05.466 [2024-11-26 15:30:03.683916] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.406 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.406 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.406 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.406 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.406 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.406 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.407 "name": "raid_bdev1", 00:14:06.407 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:06.407 "strip_size_kb": 64, 00:14:06.407 "state": 
"online", 00:14:06.407 "raid_level": "raid5f", 00:14:06.407 "superblock": true, 00:14:06.407 "num_base_bdevs": 3, 00:14:06.407 "num_base_bdevs_discovered": 3, 00:14:06.407 "num_base_bdevs_operational": 3, 00:14:06.407 "process": { 00:14:06.407 "type": "rebuild", 00:14:06.407 "target": "spare", 00:14:06.407 "progress": { 00:14:06.407 "blocks": 20480, 00:14:06.407 "percent": 16 00:14:06.407 } 00:14:06.407 }, 00:14:06.407 "base_bdevs_list": [ 00:14:06.407 { 00:14:06.407 "name": "spare", 00:14:06.407 "uuid": "e275f522-0b72-5061-9b26-e130c2d4052a", 00:14:06.407 "is_configured": true, 00:14:06.407 "data_offset": 2048, 00:14:06.407 "data_size": 63488 00:14:06.407 }, 00:14:06.407 { 00:14:06.407 "name": "BaseBdev2", 00:14:06.407 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:06.407 "is_configured": true, 00:14:06.407 "data_offset": 2048, 00:14:06.407 "data_size": 63488 00:14:06.407 }, 00:14:06.407 { 00:14:06.407 "name": "BaseBdev3", 00:14:06.407 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:06.407 "is_configured": true, 00:14:06.407 "data_offset": 2048, 00:14:06.407 "data_size": 63488 00:14:06.407 } 00:14:06.407 ] 00:14:06.407 }' 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.407 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.407 [2024-11-26 15:30:04.806366] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.668 [2024-11-26 15:30:04.893108] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.668 [2024-11-26 15:30:04.893235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.668 [2024-11-26 15:30:04.893261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.668 [2024-11-26 15:30:04.893270] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.668 "name": "raid_bdev1", 00:14:06.668 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:06.668 "strip_size_kb": 64, 00:14:06.668 "state": "online", 00:14:06.668 "raid_level": "raid5f", 00:14:06.668 "superblock": true, 00:14:06.668 "num_base_bdevs": 3, 00:14:06.668 "num_base_bdevs_discovered": 2, 00:14:06.668 "num_base_bdevs_operational": 2, 00:14:06.668 "base_bdevs_list": [ 00:14:06.668 { 00:14:06.668 "name": null, 00:14:06.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.668 "is_configured": false, 00:14:06.668 "data_offset": 0, 00:14:06.668 "data_size": 63488 00:14:06.668 }, 00:14:06.668 { 00:14:06.668 "name": "BaseBdev2", 00:14:06.668 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:06.668 "is_configured": true, 00:14:06.668 "data_offset": 2048, 00:14:06.668 "data_size": 63488 00:14:06.668 }, 00:14:06.668 { 00:14:06.668 "name": "BaseBdev3", 00:14:06.668 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:06.668 "is_configured": true, 00:14:06.668 "data_offset": 2048, 00:14:06.668 "data_size": 63488 00:14:06.668 } 00:14:06.668 ] 00:14:06.668 }' 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.668 15:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.929 "name": "raid_bdev1", 00:14:06.929 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:06.929 "strip_size_kb": 64, 00:14:06.929 "state": "online", 00:14:06.929 "raid_level": "raid5f", 00:14:06.929 "superblock": true, 00:14:06.929 "num_base_bdevs": 3, 00:14:06.929 "num_base_bdevs_discovered": 2, 00:14:06.929 "num_base_bdevs_operational": 2, 00:14:06.929 "base_bdevs_list": [ 00:14:06.929 { 00:14:06.929 "name": null, 00:14:06.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.929 "is_configured": false, 00:14:06.929 "data_offset": 0, 00:14:06.929 "data_size": 63488 00:14:06.929 }, 00:14:06.929 { 00:14:06.929 "name": "BaseBdev2", 00:14:06.929 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:06.929 "is_configured": true, 00:14:06.929 "data_offset": 2048, 00:14:06.929 "data_size": 63488 00:14:06.929 }, 00:14:06.929 { 00:14:06.929 "name": "BaseBdev3", 00:14:06.929 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:06.929 "is_configured": true, 
00:14:06.929 "data_offset": 2048, 00:14:06.929 "data_size": 63488 00:14:06.929 } 00:14:06.929 ] 00:14:06.929 }' 00:14:06.929 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.189 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.189 [2024-11-26 15:30:05.514789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:07.189 [2024-11-26 15:30:05.514843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.189 [2024-11-26 15:30:05.514865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:07.189 [2024-11-26 15:30:05.514874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.189 [2024-11-26 15:30:05.515304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.189 [2024-11-26 
15:30:05.515377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:07.190 [2024-11-26 15:30:05.515460] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:07.190 [2024-11-26 15:30:05.515474] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.190 [2024-11-26 15:30:05.515483] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.190 [2024-11-26 15:30:05.515502] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:07.190 BaseBdev1 00:14:07.190 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.190 15:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.130 15:30:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.130 "name": "raid_bdev1", 00:14:08.130 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:08.130 "strip_size_kb": 64, 00:14:08.130 "state": "online", 00:14:08.130 "raid_level": "raid5f", 00:14:08.130 "superblock": true, 00:14:08.130 "num_base_bdevs": 3, 00:14:08.130 "num_base_bdevs_discovered": 2, 00:14:08.130 "num_base_bdevs_operational": 2, 00:14:08.130 "base_bdevs_list": [ 00:14:08.130 { 00:14:08.130 "name": null, 00:14:08.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.130 "is_configured": false, 00:14:08.130 "data_offset": 0, 00:14:08.130 "data_size": 63488 00:14:08.130 }, 00:14:08.130 { 00:14:08.130 "name": "BaseBdev2", 00:14:08.130 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:08.130 "is_configured": true, 00:14:08.130 "data_offset": 2048, 00:14:08.130 "data_size": 63488 00:14:08.130 }, 00:14:08.130 { 00:14:08.130 "name": "BaseBdev3", 00:14:08.130 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:08.130 "is_configured": true, 00:14:08.130 "data_offset": 2048, 00:14:08.130 "data_size": 63488 00:14:08.130 } 00:14:08.130 ] 00:14:08.130 }' 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.130 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.701 "name": "raid_bdev1", 00:14:08.701 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:08.701 "strip_size_kb": 64, 00:14:08.701 "state": "online", 00:14:08.701 "raid_level": "raid5f", 00:14:08.701 "superblock": true, 00:14:08.701 "num_base_bdevs": 3, 00:14:08.701 "num_base_bdevs_discovered": 2, 00:14:08.701 "num_base_bdevs_operational": 2, 00:14:08.701 "base_bdevs_list": [ 00:14:08.701 { 00:14:08.701 "name": null, 00:14:08.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.701 "is_configured": false, 00:14:08.701 "data_offset": 0, 00:14:08.701 "data_size": 63488 00:14:08.701 }, 00:14:08.701 { 00:14:08.701 "name": "BaseBdev2", 00:14:08.701 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 
00:14:08.701 "is_configured": true, 00:14:08.701 "data_offset": 2048, 00:14:08.701 "data_size": 63488 00:14:08.701 }, 00:14:08.701 { 00:14:08.701 "name": "BaseBdev3", 00:14:08.701 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:08.701 "is_configured": true, 00:14:08.701 "data_offset": 2048, 00:14:08.701 "data_size": 63488 00:14:08.701 } 00:14:08.701 ] 00:14:08.701 }' 00:14:08.701 15:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.701 15:30:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.701 [2024-11-26 15:30:07.079271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.701 [2024-11-26 15:30:07.079468] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:08.701 [2024-11-26 15:30:07.079524] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:08.701 request: 00:14:08.701 { 00:14:08.701 "base_bdev": "BaseBdev1", 00:14:08.701 "raid_bdev": "raid_bdev1", 00:14:08.701 "method": "bdev_raid_add_base_bdev", 00:14:08.701 "req_id": 1 00:14:08.701 } 00:14:08.701 Got JSON-RPC error response 00:14:08.701 response: 00:14:08.701 { 00:14:08.701 "code": -22, 00:14:08.701 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:08.701 } 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:08.701 15:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.641 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.900 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.900 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.900 "name": "raid_bdev1", 00:14:09.900 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:09.900 "strip_size_kb": 64, 00:14:09.900 "state": "online", 00:14:09.900 "raid_level": "raid5f", 00:14:09.900 "superblock": true, 00:14:09.900 "num_base_bdevs": 3, 00:14:09.900 "num_base_bdevs_discovered": 2, 00:14:09.900 "num_base_bdevs_operational": 2, 00:14:09.900 "base_bdevs_list": [ 00:14:09.901 { 00:14:09.901 "name": null, 00:14:09.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.901 "is_configured": false, 00:14:09.901 "data_offset": 0, 00:14:09.901 "data_size": 63488 00:14:09.901 }, 00:14:09.901 { 00:14:09.901 
"name": "BaseBdev2", 00:14:09.901 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:09.901 "is_configured": true, 00:14:09.901 "data_offset": 2048, 00:14:09.901 "data_size": 63488 00:14:09.901 }, 00:14:09.901 { 00:14:09.901 "name": "BaseBdev3", 00:14:09.901 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:09.901 "is_configured": true, 00:14:09.901 "data_offset": 2048, 00:14:09.901 "data_size": 63488 00:14:09.901 } 00:14:09.901 ] 00:14:09.901 }' 00:14:09.901 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.901 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.160 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.160 "name": "raid_bdev1", 00:14:10.160 "uuid": "4f0aa938-606c-43df-963c-6b9f8fcd02a8", 00:14:10.160 
"strip_size_kb": 64, 00:14:10.160 "state": "online", 00:14:10.160 "raid_level": "raid5f", 00:14:10.160 "superblock": true, 00:14:10.160 "num_base_bdevs": 3, 00:14:10.160 "num_base_bdevs_discovered": 2, 00:14:10.160 "num_base_bdevs_operational": 2, 00:14:10.160 "base_bdevs_list": [ 00:14:10.160 { 00:14:10.160 "name": null, 00:14:10.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.160 "is_configured": false, 00:14:10.160 "data_offset": 0, 00:14:10.160 "data_size": 63488 00:14:10.160 }, 00:14:10.160 { 00:14:10.160 "name": "BaseBdev2", 00:14:10.161 "uuid": "b3f4c356-eda8-5680-940c-e87220be6fbc", 00:14:10.161 "is_configured": true, 00:14:10.161 "data_offset": 2048, 00:14:10.161 "data_size": 63488 00:14:10.161 }, 00:14:10.161 { 00:14:10.161 "name": "BaseBdev3", 00:14:10.161 "uuid": "43be89ea-31d6-50ba-a584-5ee2ebd5503a", 00:14:10.161 "is_configured": true, 00:14:10.161 "data_offset": 2048, 00:14:10.161 "data_size": 63488 00:14:10.161 } 00:14:10.161 ] 00:14:10.161 }' 00:14:10.161 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.161 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.161 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.420 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.420 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 94015 00:14:10.420 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 94015 ']' 00:14:10.420 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 94015 00:14:10.420 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:10.420 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.420 15:30:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94015 00:14:10.420 killing process with pid 94015 00:14:10.420 Received shutdown signal, test time was about 60.000000 seconds 00:14:10.420 00:14:10.420 Latency(us) 00:14:10.421 [2024-11-26T15:30:08.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.421 [2024-11-26T15:30:08.900Z] =================================================================================================================== 00:14:10.421 [2024-11-26T15:30:08.900Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:10.421 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.421 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.421 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94015' 00:14:10.421 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 94015 00:14:10.421 [2024-11-26 15:30:08.686527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.421 [2024-11-26 15:30:08.686649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.421 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 94015 00:14:10.421 [2024-11-26 15:30:08.686710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.421 [2024-11-26 15:30:08.686721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:10.421 [2024-11-26 15:30:08.727156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.680 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:10.680 00:14:10.680 real 0m21.439s 00:14:10.680 user 0m27.951s 
00:14:10.680 sys 0m2.590s 00:14:10.680 ************************************ 00:14:10.681 END TEST raid5f_rebuild_test_sb 00:14:10.681 ************************************ 00:14:10.681 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.681 15:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.681 15:30:08 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:10.681 15:30:08 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:10.681 15:30:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:10.681 15:30:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.681 15:30:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.681 ************************************ 00:14:10.681 START TEST raid5f_state_function_test 00:14:10.681 ************************************ 00:14:10.681 15:30:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:14:10.681 15:30:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=94752 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94752' 00:14:10.681 Process raid pid: 94752 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 94752 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 94752 ']' 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.681 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.681 [2024-11-26 15:30:09.096917] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:14:10.681 [2024-11-26 15:30:09.097122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.941 [2024-11-26 15:30:09.233115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:10.941 [2024-11-26 15:30:09.270219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.941 [2024-11-26 15:30:09.296511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.941 [2024-11-26 15:30:09.337678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.941 [2024-11-26 15:30:09.337791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.511 [2024-11-26 15:30:09.943801] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:11.511 [2024-11-26 15:30:09.943845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:11.511 [2024-11-26 15:30:09.943864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.511 [2024-11-26 15:30:09.943872] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.511 [2024-11-26 15:30:09.943882] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:11.511 [2024-11-26 15:30:09.943888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:11.511 [2024-11-26 15:30:09.943896] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:11.511 [2024-11-26 15:30:09.943902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.511 15:30:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.771 15:30:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.771 "name": "Existed_Raid", 00:14:11.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.771 "strip_size_kb": 64, 00:14:11.771 "state": "configuring", 00:14:11.771 "raid_level": "raid5f", 00:14:11.771 "superblock": false, 00:14:11.771 "num_base_bdevs": 4, 00:14:11.771 "num_base_bdevs_discovered": 0, 00:14:11.771 "num_base_bdevs_operational": 4, 00:14:11.771 "base_bdevs_list": [ 00:14:11.771 { 00:14:11.771 "name": "BaseBdev1", 00:14:11.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.771 "is_configured": false, 00:14:11.771 "data_offset": 0, 00:14:11.771 "data_size": 0 00:14:11.771 }, 00:14:11.771 { 00:14:11.771 "name": "BaseBdev2", 00:14:11.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.771 "is_configured": false, 00:14:11.771 "data_offset": 0, 00:14:11.771 "data_size": 0 00:14:11.771 }, 00:14:11.771 { 00:14:11.771 "name": "BaseBdev3", 00:14:11.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.771 "is_configured": false, 00:14:11.771 "data_offset": 0, 00:14:11.771 "data_size": 0 00:14:11.771 }, 00:14:11.771 { 00:14:11.771 "name": "BaseBdev4", 00:14:11.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.771 "is_configured": false, 00:14:11.771 "data_offset": 0, 00:14:11.771 "data_size": 0 00:14:11.771 } 00:14:11.771 ] 00:14:11.771 }' 00:14:11.771 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:14:11.771 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.031 [2024-11-26 15:30:10.415813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.031 [2024-11-26 15:30:10.415851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.031 [2024-11-26 15:30:10.423846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.031 [2024-11-26 15:30:10.423884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.031 [2024-11-26 15:30:10.423895] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.031 [2024-11-26 15:30:10.423903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.031 [2024-11-26 15:30:10.423910] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:12.031 [2024-11-26 15:30:10.423917] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:12.031 [2024-11-26 15:30:10.423925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:12.031 [2024-11-26 15:30:10.423931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.031 [2024-11-26 15:30:10.440654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.031 BaseBdev1 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:12.031 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.032 15:30:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.032 [ 00:14:12.032 { 00:14:12.032 "name": "BaseBdev1", 00:14:12.032 "aliases": [ 00:14:12.032 "70579e9e-416c-4fce-9d9e-74abb478be20" 00:14:12.032 ], 00:14:12.032 "product_name": "Malloc disk", 00:14:12.032 "block_size": 512, 00:14:12.032 "num_blocks": 65536, 00:14:12.032 "uuid": "70579e9e-416c-4fce-9d9e-74abb478be20", 00:14:12.032 "assigned_rate_limits": { 00:14:12.032 "rw_ios_per_sec": 0, 00:14:12.032 "rw_mbytes_per_sec": 0, 00:14:12.032 "r_mbytes_per_sec": 0, 00:14:12.032 "w_mbytes_per_sec": 0 00:14:12.032 }, 00:14:12.032 "claimed": true, 00:14:12.032 "claim_type": "exclusive_write", 00:14:12.032 "zoned": false, 00:14:12.032 "supported_io_types": { 00:14:12.032 "read": true, 00:14:12.032 "write": true, 00:14:12.032 "unmap": true, 00:14:12.032 "flush": true, 00:14:12.032 "reset": true, 00:14:12.032 "nvme_admin": false, 00:14:12.032 "nvme_io": false, 00:14:12.032 "nvme_io_md": false, 00:14:12.032 "write_zeroes": true, 00:14:12.032 "zcopy": true, 00:14:12.032 "get_zone_info": false, 00:14:12.032 "zone_management": false, 00:14:12.032 "zone_append": false, 00:14:12.032 "compare": false, 00:14:12.032 "compare_and_write": false, 00:14:12.032 "abort": true, 00:14:12.032 "seek_hole": false, 00:14:12.032 "seek_data": false, 00:14:12.032 "copy": true, 00:14:12.032 "nvme_iov_md": false 00:14:12.032 }, 00:14:12.032 "memory_domains": [ 00:14:12.032 { 00:14:12.032 "dma_device_id": "system", 00:14:12.032 "dma_device_type": 1 
00:14:12.032 }, 00:14:12.032 { 00:14:12.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.032 "dma_device_type": 2 00:14:12.032 } 00:14:12.032 ], 00:14:12.032 "driver_specific": {} 00:14:12.032 } 00:14:12.032 ] 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.032 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.032 
15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.292 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.292 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.292 "name": "Existed_Raid", 00:14:12.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.292 "strip_size_kb": 64, 00:14:12.292 "state": "configuring", 00:14:12.292 "raid_level": "raid5f", 00:14:12.292 "superblock": false, 00:14:12.292 "num_base_bdevs": 4, 00:14:12.292 "num_base_bdevs_discovered": 1, 00:14:12.292 "num_base_bdevs_operational": 4, 00:14:12.292 "base_bdevs_list": [ 00:14:12.292 { 00:14:12.292 "name": "BaseBdev1", 00:14:12.292 "uuid": "70579e9e-416c-4fce-9d9e-74abb478be20", 00:14:12.292 "is_configured": true, 00:14:12.292 "data_offset": 0, 00:14:12.292 "data_size": 65536 00:14:12.292 }, 00:14:12.292 { 00:14:12.292 "name": "BaseBdev2", 00:14:12.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.292 "is_configured": false, 00:14:12.292 "data_offset": 0, 00:14:12.292 "data_size": 0 00:14:12.292 }, 00:14:12.292 { 00:14:12.292 "name": "BaseBdev3", 00:14:12.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.292 "is_configured": false, 00:14:12.292 "data_offset": 0, 00:14:12.292 "data_size": 0 00:14:12.292 }, 00:14:12.292 { 00:14:12.292 "name": "BaseBdev4", 00:14:12.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.292 "is_configured": false, 00:14:12.292 "data_offset": 0, 00:14:12.292 "data_size": 0 00:14:12.292 } 00:14:12.292 ] 00:14:12.292 }' 00:14:12.292 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.292 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.553 15:30:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.553 [2024-11-26 15:30:10.908787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.553 [2024-11-26 15:30:10.908882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.553 [2024-11-26 15:30:10.920833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.553 [2024-11-26 15:30:10.922643] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.553 [2024-11-26 15:30:10.922730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.553 [2024-11-26 15:30:10.922745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:12.553 [2024-11-26 15:30:10.922752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:12.553 [2024-11-26 15:30:10.922759] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:12.553 [2024-11-26 15:30:10.922766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.553 "name": "Existed_Raid", 00:14:12.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.553 "strip_size_kb": 64, 00:14:12.553 "state": "configuring", 00:14:12.553 "raid_level": "raid5f", 00:14:12.553 "superblock": false, 00:14:12.553 "num_base_bdevs": 4, 00:14:12.553 "num_base_bdevs_discovered": 1, 00:14:12.553 "num_base_bdevs_operational": 4, 00:14:12.553 "base_bdevs_list": [ 00:14:12.553 { 00:14:12.553 "name": "BaseBdev1", 00:14:12.553 "uuid": "70579e9e-416c-4fce-9d9e-74abb478be20", 00:14:12.553 "is_configured": true, 00:14:12.553 "data_offset": 0, 00:14:12.553 "data_size": 65536 00:14:12.553 }, 00:14:12.553 { 00:14:12.553 "name": "BaseBdev2", 00:14:12.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.553 "is_configured": false, 00:14:12.553 "data_offset": 0, 00:14:12.553 "data_size": 0 00:14:12.553 }, 00:14:12.553 { 00:14:12.553 "name": "BaseBdev3", 00:14:12.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.553 "is_configured": false, 00:14:12.553 "data_offset": 0, 00:14:12.553 "data_size": 0 00:14:12.553 }, 00:14:12.553 { 00:14:12.553 "name": "BaseBdev4", 00:14:12.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.553 "is_configured": false, 00:14:12.553 "data_offset": 0, 00:14:12.553 "data_size": 0 00:14:12.553 } 00:14:12.553 ] 00:14:12.553 }' 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.553 15:30:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.127 
[2024-11-26 15:30:11.352100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.127 BaseBdev2 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.127 [ 00:14:13.127 { 00:14:13.127 "name": "BaseBdev2", 00:14:13.127 "aliases": [ 00:14:13.127 "47f3d80d-52d7-4c18-9d14-a0845ddaa1e4" 00:14:13.127 ], 00:14:13.127 "product_name": "Malloc disk", 00:14:13.127 "block_size": 512, 00:14:13.127 "num_blocks": 
65536, 00:14:13.127 "uuid": "47f3d80d-52d7-4c18-9d14-a0845ddaa1e4", 00:14:13.127 "assigned_rate_limits": { 00:14:13.127 "rw_ios_per_sec": 0, 00:14:13.127 "rw_mbytes_per_sec": 0, 00:14:13.127 "r_mbytes_per_sec": 0, 00:14:13.127 "w_mbytes_per_sec": 0 00:14:13.127 }, 00:14:13.127 "claimed": true, 00:14:13.127 "claim_type": "exclusive_write", 00:14:13.127 "zoned": false, 00:14:13.127 "supported_io_types": { 00:14:13.127 "read": true, 00:14:13.127 "write": true, 00:14:13.127 "unmap": true, 00:14:13.127 "flush": true, 00:14:13.127 "reset": true, 00:14:13.127 "nvme_admin": false, 00:14:13.127 "nvme_io": false, 00:14:13.127 "nvme_io_md": false, 00:14:13.127 "write_zeroes": true, 00:14:13.127 "zcopy": true, 00:14:13.127 "get_zone_info": false, 00:14:13.127 "zone_management": false, 00:14:13.127 "zone_append": false, 00:14:13.127 "compare": false, 00:14:13.127 "compare_and_write": false, 00:14:13.127 "abort": true, 00:14:13.127 "seek_hole": false, 00:14:13.127 "seek_data": false, 00:14:13.127 "copy": true, 00:14:13.127 "nvme_iov_md": false 00:14:13.127 }, 00:14:13.127 "memory_domains": [ 00:14:13.127 { 00:14:13.127 "dma_device_id": "system", 00:14:13.127 "dma_device_type": 1 00:14:13.127 }, 00:14:13.127 { 00:14:13.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.127 "dma_device_type": 2 00:14:13.127 } 00:14:13.127 ], 00:14:13.127 "driver_specific": {} 00:14:13.127 } 00:14:13.127 ] 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:13.127 15:30:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.127 "name": "Existed_Raid", 00:14:13.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.127 "strip_size_kb": 64, 00:14:13.127 "state": "configuring", 00:14:13.127 "raid_level": "raid5f", 00:14:13.127 "superblock": false, 00:14:13.127 "num_base_bdevs": 4, 00:14:13.127 
"num_base_bdevs_discovered": 2, 00:14:13.127 "num_base_bdevs_operational": 4, 00:14:13.127 "base_bdevs_list": [ 00:14:13.127 { 00:14:13.127 "name": "BaseBdev1", 00:14:13.127 "uuid": "70579e9e-416c-4fce-9d9e-74abb478be20", 00:14:13.127 "is_configured": true, 00:14:13.127 "data_offset": 0, 00:14:13.127 "data_size": 65536 00:14:13.127 }, 00:14:13.127 { 00:14:13.127 "name": "BaseBdev2", 00:14:13.127 "uuid": "47f3d80d-52d7-4c18-9d14-a0845ddaa1e4", 00:14:13.127 "is_configured": true, 00:14:13.127 "data_offset": 0, 00:14:13.127 "data_size": 65536 00:14:13.127 }, 00:14:13.127 { 00:14:13.127 "name": "BaseBdev3", 00:14:13.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.127 "is_configured": false, 00:14:13.127 "data_offset": 0, 00:14:13.127 "data_size": 0 00:14:13.127 }, 00:14:13.127 { 00:14:13.127 "name": "BaseBdev4", 00:14:13.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.127 "is_configured": false, 00:14:13.127 "data_offset": 0, 00:14:13.127 "data_size": 0 00:14:13.127 } 00:14:13.127 ] 00:14:13.127 }' 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.127 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.420 [2024-11-26 15:30:11.866858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.420 BaseBdev3 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:13.420 15:30:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.420 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.696 [ 00:14:13.696 { 00:14:13.696 "name": "BaseBdev3", 00:14:13.696 "aliases": [ 00:14:13.696 "a779be0c-c3d6-4767-b0be-a2cc1ddf59bb" 00:14:13.696 ], 00:14:13.696 "product_name": "Malloc disk", 00:14:13.696 "block_size": 512, 00:14:13.696 "num_blocks": 65536, 00:14:13.696 "uuid": "a779be0c-c3d6-4767-b0be-a2cc1ddf59bb", 00:14:13.696 "assigned_rate_limits": { 00:14:13.696 "rw_ios_per_sec": 0, 00:14:13.696 "rw_mbytes_per_sec": 0, 00:14:13.696 "r_mbytes_per_sec": 0, 00:14:13.696 "w_mbytes_per_sec": 0 00:14:13.696 }, 00:14:13.696 "claimed": true, 00:14:13.696 "claim_type": "exclusive_write", 00:14:13.696 "zoned": false, 00:14:13.696 
"supported_io_types": { 00:14:13.696 "read": true, 00:14:13.696 "write": true, 00:14:13.696 "unmap": true, 00:14:13.696 "flush": true, 00:14:13.696 "reset": true, 00:14:13.696 "nvme_admin": false, 00:14:13.696 "nvme_io": false, 00:14:13.696 "nvme_io_md": false, 00:14:13.696 "write_zeroes": true, 00:14:13.696 "zcopy": true, 00:14:13.696 "get_zone_info": false, 00:14:13.696 "zone_management": false, 00:14:13.696 "zone_append": false, 00:14:13.696 "compare": false, 00:14:13.696 "compare_and_write": false, 00:14:13.696 "abort": true, 00:14:13.696 "seek_hole": false, 00:14:13.696 "seek_data": false, 00:14:13.696 "copy": true, 00:14:13.696 "nvme_iov_md": false 00:14:13.696 }, 00:14:13.696 "memory_domains": [ 00:14:13.696 { 00:14:13.696 "dma_device_id": "system", 00:14:13.696 "dma_device_type": 1 00:14:13.696 }, 00:14:13.696 { 00:14:13.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.696 "dma_device_type": 2 00:14:13.696 } 00:14:13.696 ], 00:14:13.696 "driver_specific": {} 00:14:13.696 } 00:14:13.696 ] 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.696 15:30:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.696 "name": "Existed_Raid", 00:14:13.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.696 "strip_size_kb": 64, 00:14:13.696 "state": "configuring", 00:14:13.696 "raid_level": "raid5f", 00:14:13.696 "superblock": false, 00:14:13.696 "num_base_bdevs": 4, 00:14:13.696 "num_base_bdevs_discovered": 3, 00:14:13.696 "num_base_bdevs_operational": 4, 00:14:13.696 "base_bdevs_list": [ 00:14:13.696 { 00:14:13.696 "name": "BaseBdev1", 00:14:13.696 "uuid": "70579e9e-416c-4fce-9d9e-74abb478be20", 00:14:13.696 "is_configured": true, 00:14:13.696 "data_offset": 0, 00:14:13.696 "data_size": 65536 00:14:13.696 }, 00:14:13.696 { 00:14:13.696 "name": 
"BaseBdev2", 00:14:13.696 "uuid": "47f3d80d-52d7-4c18-9d14-a0845ddaa1e4", 00:14:13.696 "is_configured": true, 00:14:13.696 "data_offset": 0, 00:14:13.696 "data_size": 65536 00:14:13.696 }, 00:14:13.696 { 00:14:13.696 "name": "BaseBdev3", 00:14:13.696 "uuid": "a779be0c-c3d6-4767-b0be-a2cc1ddf59bb", 00:14:13.696 "is_configured": true, 00:14:13.696 "data_offset": 0, 00:14:13.696 "data_size": 65536 00:14:13.696 }, 00:14:13.696 { 00:14:13.696 "name": "BaseBdev4", 00:14:13.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.696 "is_configured": false, 00:14:13.696 "data_offset": 0, 00:14:13.696 "data_size": 0 00:14:13.696 } 00:14:13.696 ] 00:14:13.696 }' 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.696 15:30:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.957 BaseBdev4 00:14:13.957 [2024-11-26 15:30:12.333961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:13.957 [2024-11-26 15:30:12.334019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:13.957 [2024-11-26 15:30:12.334029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:13.957 [2024-11-26 15:30:12.334327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:13.957 [2024-11-26 15:30:12.334782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:13.957 [2024-11-26 15:30:12.334793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name Existed_Raid, raid_bdev 0x617000007b00 00:14:13.957 [2024-11-26 15:30:12.335003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.957 [ 00:14:13.957 { 00:14:13.957 "name": "BaseBdev4", 00:14:13.957 "aliases": [ 00:14:13.957 "c3558c26-2786-4331-ba69-84a3991af93a" 00:14:13.957 ], 00:14:13.957 "product_name": "Malloc disk", 00:14:13.957 "block_size": 512, 00:14:13.957 
"num_blocks": 65536, 00:14:13.957 "uuid": "c3558c26-2786-4331-ba69-84a3991af93a", 00:14:13.957 "assigned_rate_limits": { 00:14:13.957 "rw_ios_per_sec": 0, 00:14:13.957 "rw_mbytes_per_sec": 0, 00:14:13.957 "r_mbytes_per_sec": 0, 00:14:13.957 "w_mbytes_per_sec": 0 00:14:13.957 }, 00:14:13.957 "claimed": true, 00:14:13.957 "claim_type": "exclusive_write", 00:14:13.957 "zoned": false, 00:14:13.957 "supported_io_types": { 00:14:13.957 "read": true, 00:14:13.957 "write": true, 00:14:13.957 "unmap": true, 00:14:13.957 "flush": true, 00:14:13.957 "reset": true, 00:14:13.957 "nvme_admin": false, 00:14:13.957 "nvme_io": false, 00:14:13.957 "nvme_io_md": false, 00:14:13.957 "write_zeroes": true, 00:14:13.957 "zcopy": true, 00:14:13.957 "get_zone_info": false, 00:14:13.957 "zone_management": false, 00:14:13.957 "zone_append": false, 00:14:13.957 "compare": false, 00:14:13.957 "compare_and_write": false, 00:14:13.957 "abort": true, 00:14:13.957 "seek_hole": false, 00:14:13.957 "seek_data": false, 00:14:13.957 "copy": true, 00:14:13.957 "nvme_iov_md": false 00:14:13.957 }, 00:14:13.957 "memory_domains": [ 00:14:13.957 { 00:14:13.957 "dma_device_id": "system", 00:14:13.957 "dma_device_type": 1 00:14:13.957 }, 00:14:13.957 { 00:14:13.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.957 "dma_device_type": 2 00:14:13.957 } 00:14:13.957 ], 00:14:13.957 "driver_specific": {} 00:14:13.957 } 00:14:13.957 ] 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:13.957 
15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.957 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.957 "name": "Existed_Raid", 00:14:13.957 "uuid": "4328c405-a80e-409d-a386-fa6384a466d4", 00:14:13.957 "strip_size_kb": 64, 00:14:13.957 "state": "online", 00:14:13.957 "raid_level": "raid5f", 00:14:13.957 "superblock": false, 00:14:13.957 "num_base_bdevs": 4, 00:14:13.957 
"num_base_bdevs_discovered": 4, 00:14:13.957 "num_base_bdevs_operational": 4, 00:14:13.957 "base_bdevs_list": [ 00:14:13.957 { 00:14:13.957 "name": "BaseBdev1", 00:14:13.957 "uuid": "70579e9e-416c-4fce-9d9e-74abb478be20", 00:14:13.957 "is_configured": true, 00:14:13.957 "data_offset": 0, 00:14:13.957 "data_size": 65536 00:14:13.957 }, 00:14:13.957 { 00:14:13.957 "name": "BaseBdev2", 00:14:13.957 "uuid": "47f3d80d-52d7-4c18-9d14-a0845ddaa1e4", 00:14:13.957 "is_configured": true, 00:14:13.957 "data_offset": 0, 00:14:13.957 "data_size": 65536 00:14:13.957 }, 00:14:13.957 { 00:14:13.957 "name": "BaseBdev3", 00:14:13.957 "uuid": "a779be0c-c3d6-4767-b0be-a2cc1ddf59bb", 00:14:13.957 "is_configured": true, 00:14:13.957 "data_offset": 0, 00:14:13.958 "data_size": 65536 00:14:13.958 }, 00:14:13.958 { 00:14:13.958 "name": "BaseBdev4", 00:14:13.958 "uuid": "c3558c26-2786-4331-ba69-84a3991af93a", 00:14:13.958 "is_configured": true, 00:14:13.958 "data_offset": 0, 00:14:13.958 "data_size": 65536 00:14:13.958 } 00:14:13.958 ] 00:14:13.958 }' 00:14:13.958 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.958 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:14.527 15:30:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.527 [2024-11-26 15:30:12.814320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.527 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:14.527 "name": "Existed_Raid", 00:14:14.527 "aliases": [ 00:14:14.527 "4328c405-a80e-409d-a386-fa6384a466d4" 00:14:14.527 ], 00:14:14.527 "product_name": "Raid Volume", 00:14:14.527 "block_size": 512, 00:14:14.527 "num_blocks": 196608, 00:14:14.527 "uuid": "4328c405-a80e-409d-a386-fa6384a466d4", 00:14:14.527 "assigned_rate_limits": { 00:14:14.527 "rw_ios_per_sec": 0, 00:14:14.527 "rw_mbytes_per_sec": 0, 00:14:14.527 "r_mbytes_per_sec": 0, 00:14:14.527 "w_mbytes_per_sec": 0 00:14:14.527 }, 00:14:14.527 "claimed": false, 00:14:14.527 "zoned": false, 00:14:14.527 "supported_io_types": { 00:14:14.527 "read": true, 00:14:14.527 "write": true, 00:14:14.527 "unmap": false, 00:14:14.527 "flush": false, 00:14:14.527 "reset": true, 00:14:14.527 "nvme_admin": false, 00:14:14.527 "nvme_io": false, 00:14:14.527 "nvme_io_md": false, 00:14:14.527 "write_zeroes": true, 00:14:14.527 "zcopy": false, 00:14:14.527 "get_zone_info": false, 00:14:14.527 "zone_management": false, 00:14:14.527 "zone_append": false, 00:14:14.527 "compare": false, 00:14:14.527 "compare_and_write": false, 00:14:14.527 "abort": false, 00:14:14.527 "seek_hole": false, 00:14:14.527 "seek_data": false, 00:14:14.527 "copy": false, 00:14:14.527 "nvme_iov_md": false 
00:14:14.527 }, 00:14:14.527 "driver_specific": { 00:14:14.527 "raid": { 00:14:14.527 "uuid": "4328c405-a80e-409d-a386-fa6384a466d4", 00:14:14.528 "strip_size_kb": 64, 00:14:14.528 "state": "online", 00:14:14.528 "raid_level": "raid5f", 00:14:14.528 "superblock": false, 00:14:14.528 "num_base_bdevs": 4, 00:14:14.528 "num_base_bdevs_discovered": 4, 00:14:14.528 "num_base_bdevs_operational": 4, 00:14:14.528 "base_bdevs_list": [ 00:14:14.528 { 00:14:14.528 "name": "BaseBdev1", 00:14:14.528 "uuid": "70579e9e-416c-4fce-9d9e-74abb478be20", 00:14:14.528 "is_configured": true, 00:14:14.528 "data_offset": 0, 00:14:14.528 "data_size": 65536 00:14:14.528 }, 00:14:14.528 { 00:14:14.528 "name": "BaseBdev2", 00:14:14.528 "uuid": "47f3d80d-52d7-4c18-9d14-a0845ddaa1e4", 00:14:14.528 "is_configured": true, 00:14:14.528 "data_offset": 0, 00:14:14.528 "data_size": 65536 00:14:14.528 }, 00:14:14.528 { 00:14:14.528 "name": "BaseBdev3", 00:14:14.528 "uuid": "a779be0c-c3d6-4767-b0be-a2cc1ddf59bb", 00:14:14.528 "is_configured": true, 00:14:14.528 "data_offset": 0, 00:14:14.528 "data_size": 65536 00:14:14.528 }, 00:14:14.528 { 00:14:14.528 "name": "BaseBdev4", 00:14:14.528 "uuid": "c3558c26-2786-4331-ba69-84a3991af93a", 00:14:14.528 "is_configured": true, 00:14:14.528 "data_offset": 0, 00:14:14.528 "data_size": 65536 00:14:14.528 } 00:14:14.528 ] 00:14:14.528 } 00:14:14.528 } 00:14:14.528 }' 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:14.528 BaseBdev2 00:14:14.528 BaseBdev3 00:14:14.528 BaseBdev4' 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.528 15:30:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:14.528 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.528 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:14.788 
15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 [2024-11-26 15:30:13.138268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.788 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.788 "name": "Existed_Raid", 00:14:14.788 "uuid": "4328c405-a80e-409d-a386-fa6384a466d4", 00:14:14.788 "strip_size_kb": 64, 00:14:14.788 "state": "online", 00:14:14.788 "raid_level": "raid5f", 00:14:14.788 "superblock": false, 00:14:14.788 "num_base_bdevs": 4, 00:14:14.788 "num_base_bdevs_discovered": 3, 00:14:14.788 "num_base_bdevs_operational": 3, 00:14:14.788 "base_bdevs_list": [ 00:14:14.788 { 00:14:14.788 "name": null, 00:14:14.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.788 "is_configured": false, 00:14:14.788 "data_offset": 0, 00:14:14.788 "data_size": 65536 00:14:14.788 }, 00:14:14.788 { 00:14:14.788 "name": "BaseBdev2", 00:14:14.788 "uuid": "47f3d80d-52d7-4c18-9d14-a0845ddaa1e4", 00:14:14.788 "is_configured": true, 00:14:14.788 "data_offset": 0, 00:14:14.788 "data_size": 65536 00:14:14.788 }, 00:14:14.788 { 00:14:14.788 "name": "BaseBdev3", 00:14:14.788 "uuid": "a779be0c-c3d6-4767-b0be-a2cc1ddf59bb", 00:14:14.788 "is_configured": true, 00:14:14.788 "data_offset": 0, 00:14:14.788 "data_size": 65536 00:14:14.788 }, 00:14:14.788 { 00:14:14.788 "name": "BaseBdev4", 00:14:14.788 "uuid": "c3558c26-2786-4331-ba69-84a3991af93a", 00:14:14.789 
"is_configured": true, 00:14:14.789 "data_offset": 0, 00:14:14.789 "data_size": 65536 00:14:14.789 } 00:14:14.789 ] 00:14:14.789 }' 00:14:14.789 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.789 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.359 [2024-11-26 15:30:13.613660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.359 [2024-11-26 15:30:13.613824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.359 [2024-11-26 15:30:13.625257] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.359 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.360 [2024-11-26 15:30:13.669295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.360 15:30:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.360 [2024-11-26 15:30:13.736615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:15.360 [2024-11-26 15:30:13.736704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.360 15:30:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.360 BaseBdev2 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.360 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.360 [ 00:14:15.360 { 00:14:15.360 "name": "BaseBdev2", 00:14:15.360 "aliases": [ 00:14:15.360 "a63a8a34-73e7-4238-bed8-7dbb34eed9c3" 00:14:15.360 ], 00:14:15.360 "product_name": "Malloc disk", 00:14:15.360 "block_size": 512, 00:14:15.360 "num_blocks": 65536, 00:14:15.360 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:15.360 "assigned_rate_limits": { 00:14:15.360 "rw_ios_per_sec": 0, 00:14:15.360 "rw_mbytes_per_sec": 0, 00:14:15.360 "r_mbytes_per_sec": 0, 00:14:15.360 "w_mbytes_per_sec": 0 00:14:15.360 }, 00:14:15.360 "claimed": false, 00:14:15.360 "zoned": false, 00:14:15.360 "supported_io_types": { 00:14:15.360 "read": true, 00:14:15.621 "write": true, 00:14:15.621 "unmap": true, 00:14:15.621 "flush": true, 00:14:15.621 "reset": true, 00:14:15.621 "nvme_admin": false, 00:14:15.621 "nvme_io": false, 00:14:15.621 "nvme_io_md": false, 00:14:15.621 "write_zeroes": true, 00:14:15.621 "zcopy": true, 00:14:15.621 "get_zone_info": false, 00:14:15.621 "zone_management": false, 00:14:15.621 "zone_append": false, 00:14:15.621 "compare": false, 00:14:15.621 "compare_and_write": false, 00:14:15.621 "abort": true, 00:14:15.621 "seek_hole": false, 00:14:15.621 
"seek_data": false, 00:14:15.621 "copy": true, 00:14:15.621 "nvme_iov_md": false 00:14:15.621 }, 00:14:15.621 "memory_domains": [ 00:14:15.621 { 00:14:15.621 "dma_device_id": "system", 00:14:15.621 "dma_device_type": 1 00:14:15.621 }, 00:14:15.621 { 00:14:15.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.621 "dma_device_type": 2 00:14:15.621 } 00:14:15.621 ], 00:14:15.621 "driver_specific": {} 00:14:15.621 } 00:14:15.621 ] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.621 BaseBdev3 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.621 [ 00:14:15.621 { 00:14:15.621 "name": "BaseBdev3", 00:14:15.621 "aliases": [ 00:14:15.621 "abf6ca6a-57ed-4623-893c-e0c15e2edb94" 00:14:15.621 ], 00:14:15.621 "product_name": "Malloc disk", 00:14:15.621 "block_size": 512, 00:14:15.621 "num_blocks": 65536, 00:14:15.621 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:15.621 "assigned_rate_limits": { 00:14:15.621 "rw_ios_per_sec": 0, 00:14:15.621 "rw_mbytes_per_sec": 0, 00:14:15.621 "r_mbytes_per_sec": 0, 00:14:15.621 "w_mbytes_per_sec": 0 00:14:15.621 }, 00:14:15.621 "claimed": false, 00:14:15.621 "zoned": false, 00:14:15.621 "supported_io_types": { 00:14:15.621 "read": true, 00:14:15.621 "write": true, 00:14:15.621 "unmap": true, 00:14:15.621 "flush": true, 00:14:15.621 "reset": true, 00:14:15.621 "nvme_admin": false, 00:14:15.621 "nvme_io": false, 00:14:15.621 "nvme_io_md": false, 00:14:15.621 "write_zeroes": true, 00:14:15.621 "zcopy": true, 00:14:15.621 "get_zone_info": false, 00:14:15.621 "zone_management": false, 00:14:15.621 "zone_append": false, 00:14:15.621 "compare": false, 00:14:15.621 "compare_and_write": false, 00:14:15.621 "abort": true, 
00:14:15.621 "seek_hole": false, 00:14:15.621 "seek_data": false, 00:14:15.621 "copy": true, 00:14:15.621 "nvme_iov_md": false 00:14:15.621 }, 00:14:15.621 "memory_domains": [ 00:14:15.621 { 00:14:15.621 "dma_device_id": "system", 00:14:15.621 "dma_device_type": 1 00:14:15.621 }, 00:14:15.621 { 00:14:15.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.621 "dma_device_type": 2 00:14:15.621 } 00:14:15.621 ], 00:14:15.621 "driver_specific": {} 00:14:15.621 } 00:14:15.621 ] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.621 BaseBdev4 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.621 15:30:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.621 [ 00:14:15.621 { 00:14:15.621 "name": "BaseBdev4", 00:14:15.621 "aliases": [ 00:14:15.621 "65c75b05-a37e-4659-83c5-e3c9ef024363" 00:14:15.621 ], 00:14:15.621 "product_name": "Malloc disk", 00:14:15.621 "block_size": 512, 00:14:15.621 "num_blocks": 65536, 00:14:15.621 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:15.621 "assigned_rate_limits": { 00:14:15.621 "rw_ios_per_sec": 0, 00:14:15.621 "rw_mbytes_per_sec": 0, 00:14:15.621 "r_mbytes_per_sec": 0, 00:14:15.621 "w_mbytes_per_sec": 0 00:14:15.621 }, 00:14:15.621 "claimed": false, 00:14:15.621 "zoned": false, 00:14:15.621 "supported_io_types": { 00:14:15.621 "read": true, 00:14:15.621 "write": true, 00:14:15.621 "unmap": true, 00:14:15.621 "flush": true, 00:14:15.621 "reset": true, 00:14:15.621 "nvme_admin": false, 00:14:15.621 "nvme_io": false, 00:14:15.621 "nvme_io_md": false, 00:14:15.621 "write_zeroes": true, 00:14:15.621 "zcopy": true, 00:14:15.621 "get_zone_info": false, 00:14:15.621 "zone_management": false, 00:14:15.621 "zone_append": false, 00:14:15.621 "compare": false, 00:14:15.621 
"compare_and_write": false, 00:14:15.621 "abort": true, 00:14:15.621 "seek_hole": false, 00:14:15.621 "seek_data": false, 00:14:15.621 "copy": true, 00:14:15.621 "nvme_iov_md": false 00:14:15.621 }, 00:14:15.621 "memory_domains": [ 00:14:15.621 { 00:14:15.621 "dma_device_id": "system", 00:14:15.621 "dma_device_type": 1 00:14:15.621 }, 00:14:15.621 { 00:14:15.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.621 "dma_device_type": 2 00:14:15.621 } 00:14:15.621 ], 00:14:15.621 "driver_specific": {} 00:14:15.621 } 00:14:15.621 ] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.621 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.622 [2024-11-26 15:30:13.952681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.622 [2024-11-26 15:30:13.952770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.622 [2024-11-26 15:30:13.952810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.622 [2024-11-26 15:30:13.954740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.622 [2024-11-26 15:30:13.954828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev4 is claimed 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.622 15:30:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.622 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:15.622 "name": "Existed_Raid", 00:14:15.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.622 "strip_size_kb": 64, 00:14:15.622 "state": "configuring", 00:14:15.622 "raid_level": "raid5f", 00:14:15.622 "superblock": false, 00:14:15.622 "num_base_bdevs": 4, 00:14:15.622 "num_base_bdevs_discovered": 3, 00:14:15.622 "num_base_bdevs_operational": 4, 00:14:15.622 "base_bdevs_list": [ 00:14:15.622 { 00:14:15.622 "name": "BaseBdev1", 00:14:15.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.622 "is_configured": false, 00:14:15.622 "data_offset": 0, 00:14:15.622 "data_size": 0 00:14:15.622 }, 00:14:15.622 { 00:14:15.622 "name": "BaseBdev2", 00:14:15.622 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:15.622 "is_configured": true, 00:14:15.622 "data_offset": 0, 00:14:15.622 "data_size": 65536 00:14:15.622 }, 00:14:15.622 { 00:14:15.622 "name": "BaseBdev3", 00:14:15.622 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:15.622 "is_configured": true, 00:14:15.622 "data_offset": 0, 00:14:15.622 "data_size": 65536 00:14:15.622 }, 00:14:15.622 { 00:14:15.622 "name": "BaseBdev4", 00:14:15.622 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:15.622 "is_configured": true, 00:14:15.622 "data_offset": 0, 00:14:15.622 "data_size": 65536 00:14:15.622 } 00:14:15.622 ] 00:14:15.622 }' 00:14:15.622 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.622 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.192 [2024-11-26 15:30:14.404809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.192 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.192 "name": 
"Existed_Raid", 00:14:16.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.192 "strip_size_kb": 64, 00:14:16.192 "state": "configuring", 00:14:16.192 "raid_level": "raid5f", 00:14:16.192 "superblock": false, 00:14:16.192 "num_base_bdevs": 4, 00:14:16.192 "num_base_bdevs_discovered": 2, 00:14:16.192 "num_base_bdevs_operational": 4, 00:14:16.192 "base_bdevs_list": [ 00:14:16.192 { 00:14:16.192 "name": "BaseBdev1", 00:14:16.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.192 "is_configured": false, 00:14:16.193 "data_offset": 0, 00:14:16.193 "data_size": 0 00:14:16.193 }, 00:14:16.193 { 00:14:16.193 "name": null, 00:14:16.193 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:16.193 "is_configured": false, 00:14:16.193 "data_offset": 0, 00:14:16.193 "data_size": 65536 00:14:16.193 }, 00:14:16.193 { 00:14:16.193 "name": "BaseBdev3", 00:14:16.193 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:16.193 "is_configured": true, 00:14:16.193 "data_offset": 0, 00:14:16.193 "data_size": 65536 00:14:16.193 }, 00:14:16.193 { 00:14:16.193 "name": "BaseBdev4", 00:14:16.193 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:16.193 "is_configured": true, 00:14:16.193 "data_offset": 0, 00:14:16.193 "data_size": 65536 00:14:16.193 } 00:14:16.193 ] 00:14:16.193 }' 00:14:16.193 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.193 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.453 15:30:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.453 [2024-11-26 15:30:14.887807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.453 BaseBdev1 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.453 15:30:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.453 [ 00:14:16.453 { 00:14:16.453 "name": "BaseBdev1", 00:14:16.453 "aliases": [ 00:14:16.453 "a2438c8b-7128-4d0f-b04f-6b364647fdfd" 00:14:16.453 ], 00:14:16.453 "product_name": "Malloc disk", 00:14:16.453 "block_size": 512, 00:14:16.453 "num_blocks": 65536, 00:14:16.453 "uuid": "a2438c8b-7128-4d0f-b04f-6b364647fdfd", 00:14:16.453 "assigned_rate_limits": { 00:14:16.453 "rw_ios_per_sec": 0, 00:14:16.453 "rw_mbytes_per_sec": 0, 00:14:16.453 "r_mbytes_per_sec": 0, 00:14:16.453 "w_mbytes_per_sec": 0 00:14:16.453 }, 00:14:16.453 "claimed": true, 00:14:16.453 "claim_type": "exclusive_write", 00:14:16.453 "zoned": false, 00:14:16.453 "supported_io_types": { 00:14:16.453 "read": true, 00:14:16.453 "write": true, 00:14:16.453 "unmap": true, 00:14:16.453 "flush": true, 00:14:16.453 "reset": true, 00:14:16.453 "nvme_admin": false, 00:14:16.453 "nvme_io": false, 00:14:16.453 "nvme_io_md": false, 00:14:16.453 "write_zeroes": true, 00:14:16.453 "zcopy": true, 00:14:16.453 "get_zone_info": false, 00:14:16.453 "zone_management": false, 00:14:16.453 "zone_append": false, 00:14:16.453 "compare": false, 00:14:16.453 "compare_and_write": false, 00:14:16.453 "abort": true, 00:14:16.453 "seek_hole": false, 00:14:16.453 "seek_data": false, 00:14:16.453 "copy": true, 00:14:16.453 "nvme_iov_md": false 00:14:16.453 }, 00:14:16.453 "memory_domains": [ 00:14:16.453 { 00:14:16.453 "dma_device_id": "system", 00:14:16.453 "dma_device_type": 1 00:14:16.453 }, 00:14:16.453 { 00:14:16.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.453 "dma_device_type": 2 00:14:16.453 } 00:14:16.453 ], 00:14:16.453 "driver_specific": {} 00:14:16.453 } 00:14:16.453 ] 
00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.453 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.713 15:30:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.713 "name": "Existed_Raid", 00:14:16.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.713 "strip_size_kb": 64, 00:14:16.713 "state": "configuring", 00:14:16.713 "raid_level": "raid5f", 00:14:16.713 "superblock": false, 00:14:16.713 "num_base_bdevs": 4, 00:14:16.713 "num_base_bdevs_discovered": 3, 00:14:16.713 "num_base_bdevs_operational": 4, 00:14:16.713 "base_bdevs_list": [ 00:14:16.713 { 00:14:16.713 "name": "BaseBdev1", 00:14:16.713 "uuid": "a2438c8b-7128-4d0f-b04f-6b364647fdfd", 00:14:16.713 "is_configured": true, 00:14:16.713 "data_offset": 0, 00:14:16.713 "data_size": 65536 00:14:16.713 }, 00:14:16.713 { 00:14:16.713 "name": null, 00:14:16.713 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:16.713 "is_configured": false, 00:14:16.713 "data_offset": 0, 00:14:16.713 "data_size": 65536 00:14:16.713 }, 00:14:16.713 { 00:14:16.713 "name": "BaseBdev3", 00:14:16.713 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:16.713 "is_configured": true, 00:14:16.713 "data_offset": 0, 00:14:16.713 "data_size": 65536 00:14:16.713 }, 00:14:16.713 { 00:14:16.713 "name": "BaseBdev4", 00:14:16.713 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:16.713 "is_configured": true, 00:14:16.713 "data_offset": 0, 00:14:16.713 "data_size": 65536 00:14:16.713 } 00:14:16.713 ] 00:14:16.713 }' 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.713 15:30:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:16.974 
15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.974 [2024-11-26 15:30:15.379974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.974 15:30:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.974 "name": "Existed_Raid", 00:14:16.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.974 "strip_size_kb": 64, 00:14:16.974 "state": "configuring", 00:14:16.974 "raid_level": "raid5f", 00:14:16.974 "superblock": false, 00:14:16.974 "num_base_bdevs": 4, 00:14:16.974 "num_base_bdevs_discovered": 2, 00:14:16.974 "num_base_bdevs_operational": 4, 00:14:16.974 "base_bdevs_list": [ 00:14:16.974 { 00:14:16.974 "name": "BaseBdev1", 00:14:16.974 "uuid": "a2438c8b-7128-4d0f-b04f-6b364647fdfd", 00:14:16.974 "is_configured": true, 00:14:16.974 "data_offset": 0, 00:14:16.974 "data_size": 65536 00:14:16.974 }, 00:14:16.974 { 00:14:16.974 "name": null, 00:14:16.974 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:16.974 "is_configured": false, 00:14:16.974 "data_offset": 0, 00:14:16.974 "data_size": 65536 00:14:16.974 }, 00:14:16.974 { 00:14:16.974 "name": null, 00:14:16.974 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:16.974 "is_configured": false, 00:14:16.974 "data_offset": 0, 00:14:16.974 "data_size": 65536 00:14:16.974 }, 00:14:16.974 { 00:14:16.974 "name": "BaseBdev4", 00:14:16.974 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:16.974 "is_configured": true, 00:14:16.974 
"data_offset": 0, 00:14:16.974 "data_size": 65536 00:14:16.974 } 00:14:16.974 ] 00:14:16.974 }' 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.974 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 [2024-11-26 15:30:15.840154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.544 
15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.544 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.544 "name": "Existed_Raid", 00:14:17.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.544 "strip_size_kb": 64, 00:14:17.544 "state": "configuring", 00:14:17.544 "raid_level": "raid5f", 00:14:17.544 "superblock": false, 00:14:17.544 "num_base_bdevs": 4, 00:14:17.544 "num_base_bdevs_discovered": 3, 00:14:17.544 "num_base_bdevs_operational": 4, 00:14:17.544 "base_bdevs_list": [ 00:14:17.544 { 00:14:17.544 "name": "BaseBdev1", 00:14:17.544 "uuid": "a2438c8b-7128-4d0f-b04f-6b364647fdfd", 00:14:17.544 "is_configured": 
true, 00:14:17.544 "data_offset": 0, 00:14:17.544 "data_size": 65536 00:14:17.544 }, 00:14:17.544 { 00:14:17.544 "name": null, 00:14:17.544 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:17.544 "is_configured": false, 00:14:17.544 "data_offset": 0, 00:14:17.544 "data_size": 65536 00:14:17.544 }, 00:14:17.544 { 00:14:17.544 "name": "BaseBdev3", 00:14:17.544 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:17.544 "is_configured": true, 00:14:17.544 "data_offset": 0, 00:14:17.544 "data_size": 65536 00:14:17.544 }, 00:14:17.544 { 00:14:17.545 "name": "BaseBdev4", 00:14:17.545 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:17.545 "is_configured": true, 00:14:17.545 "data_offset": 0, 00:14:17.545 "data_size": 65536 00:14:17.545 } 00:14:17.545 ] 00:14:17.545 }' 00:14:17.545 15:30:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.545 15:30:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.804 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.804 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.804 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.804 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:17.804 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.065 [2024-11-26 15:30:16.300290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.065 "name": "Existed_Raid", 00:14:18.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.065 "strip_size_kb": 64, 00:14:18.065 "state": "configuring", 00:14:18.065 "raid_level": "raid5f", 00:14:18.065 "superblock": false, 00:14:18.065 "num_base_bdevs": 4, 00:14:18.065 "num_base_bdevs_discovered": 2, 00:14:18.065 "num_base_bdevs_operational": 4, 00:14:18.065 "base_bdevs_list": [ 00:14:18.065 { 00:14:18.065 "name": null, 00:14:18.065 "uuid": "a2438c8b-7128-4d0f-b04f-6b364647fdfd", 00:14:18.065 "is_configured": false, 00:14:18.065 "data_offset": 0, 00:14:18.065 "data_size": 65536 00:14:18.065 }, 00:14:18.065 { 00:14:18.065 "name": null, 00:14:18.065 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:18.065 "is_configured": false, 00:14:18.065 "data_offset": 0, 00:14:18.065 "data_size": 65536 00:14:18.065 }, 00:14:18.065 { 00:14:18.065 "name": "BaseBdev3", 00:14:18.065 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:18.065 "is_configured": true, 00:14:18.065 "data_offset": 0, 00:14:18.065 "data_size": 65536 00:14:18.065 }, 00:14:18.065 { 00:14:18.065 "name": "BaseBdev4", 00:14:18.065 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:18.065 "is_configured": true, 00:14:18.065 "data_offset": 0, 00:14:18.065 "data_size": 65536 00:14:18.065 } 00:14:18.065 ] 00:14:18.065 }' 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.065 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.327 [2024-11-26 15:30:16.758993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.327 15:30:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.327 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.588 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.588 "name": "Existed_Raid", 00:14:18.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.588 "strip_size_kb": 64, 00:14:18.588 "state": "configuring", 00:14:18.588 "raid_level": "raid5f", 00:14:18.588 "superblock": false, 00:14:18.588 "num_base_bdevs": 4, 00:14:18.588 "num_base_bdevs_discovered": 3, 00:14:18.588 "num_base_bdevs_operational": 4, 00:14:18.588 "base_bdevs_list": [ 00:14:18.588 { 00:14:18.588 "name": null, 00:14:18.588 "uuid": "a2438c8b-7128-4d0f-b04f-6b364647fdfd", 00:14:18.588 "is_configured": false, 00:14:18.588 "data_offset": 0, 00:14:18.588 "data_size": 65536 00:14:18.588 }, 00:14:18.588 { 00:14:18.588 "name": "BaseBdev2", 00:14:18.588 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:18.588 "is_configured": true, 00:14:18.588 "data_offset": 0, 00:14:18.588 "data_size": 65536 00:14:18.588 }, 00:14:18.588 { 00:14:18.588 "name": "BaseBdev3", 00:14:18.588 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:18.588 "is_configured": true, 00:14:18.588 "data_offset": 0, 00:14:18.588 "data_size": 65536 00:14:18.588 }, 00:14:18.588 { 00:14:18.588 "name": 
"BaseBdev4", 00:14:18.588 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:18.588 "is_configured": true, 00:14:18.588 "data_offset": 0, 00:14:18.588 "data_size": 65536 00:14:18.588 } 00:14:18.588 ] 00:14:18.588 }' 00:14:18.588 15:30:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.588 15:30:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a2438c8b-7128-4d0f-b04f-6b364647fdfd 00:14:18.848 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.848 15:30:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.119 [2024-11-26 15:30:17.325980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:19.119 [2024-11-26 15:30:17.326088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:19.119 [2024-11-26 15:30:17.326118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:19.119 [2024-11-26 15:30:17.326414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:14:19.119 [2024-11-26 15:30:17.326908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:19.119 [2024-11-26 15:30:17.326952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:19.119 [2024-11-26 15:30:17.327172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.119 NewBaseBdev 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.119 [ 00:14:19.119 { 00:14:19.119 "name": "NewBaseBdev", 00:14:19.119 "aliases": [ 00:14:19.119 "a2438c8b-7128-4d0f-b04f-6b364647fdfd" 00:14:19.119 ], 00:14:19.119 "product_name": "Malloc disk", 00:14:19.119 "block_size": 512, 00:14:19.119 "num_blocks": 65536, 00:14:19.119 "uuid": "a2438c8b-7128-4d0f-b04f-6b364647fdfd", 00:14:19.119 "assigned_rate_limits": { 00:14:19.119 "rw_ios_per_sec": 0, 00:14:19.119 "rw_mbytes_per_sec": 0, 00:14:19.119 "r_mbytes_per_sec": 0, 00:14:19.119 "w_mbytes_per_sec": 0 00:14:19.119 }, 00:14:19.119 "claimed": true, 00:14:19.119 "claim_type": "exclusive_write", 00:14:19.119 "zoned": false, 00:14:19.119 "supported_io_types": { 00:14:19.119 "read": true, 00:14:19.119 "write": true, 00:14:19.119 "unmap": true, 00:14:19.119 "flush": true, 00:14:19.119 "reset": true, 00:14:19.119 "nvme_admin": false, 00:14:19.119 "nvme_io": false, 00:14:19.119 "nvme_io_md": false, 00:14:19.119 "write_zeroes": true, 00:14:19.119 "zcopy": true, 00:14:19.119 "get_zone_info": false, 00:14:19.119 "zone_management": false, 00:14:19.119 "zone_append": false, 00:14:19.119 "compare": false, 00:14:19.119 "compare_and_write": false, 00:14:19.119 "abort": true, 00:14:19.119 "seek_hole": false, 00:14:19.119 "seek_data": false, 00:14:19.119 "copy": true, 00:14:19.119 "nvme_iov_md": false 00:14:19.119 }, 00:14:19.119 "memory_domains": [ 00:14:19.119 { 
00:14:19.119 "dma_device_id": "system", 00:14:19.119 "dma_device_type": 1 00:14:19.119 }, 00:14:19.119 { 00:14:19.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.119 "dma_device_type": 2 00:14:19.119 } 00:14:19.119 ], 00:14:19.119 "driver_specific": {} 00:14:19.119 } 00:14:19.119 ] 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.119 "name": "Existed_Raid", 00:14:19.119 "uuid": "fa3d570a-1070-4040-bffc-251006c4ea3c", 00:14:19.119 "strip_size_kb": 64, 00:14:19.119 "state": "online", 00:14:19.119 "raid_level": "raid5f", 00:14:19.119 "superblock": false, 00:14:19.119 "num_base_bdevs": 4, 00:14:19.119 "num_base_bdevs_discovered": 4, 00:14:19.119 "num_base_bdevs_operational": 4, 00:14:19.119 "base_bdevs_list": [ 00:14:19.119 { 00:14:19.119 "name": "NewBaseBdev", 00:14:19.119 "uuid": "a2438c8b-7128-4d0f-b04f-6b364647fdfd", 00:14:19.119 "is_configured": true, 00:14:19.119 "data_offset": 0, 00:14:19.119 "data_size": 65536 00:14:19.119 }, 00:14:19.119 { 00:14:19.119 "name": "BaseBdev2", 00:14:19.119 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:19.119 "is_configured": true, 00:14:19.119 "data_offset": 0, 00:14:19.119 "data_size": 65536 00:14:19.119 }, 00:14:19.119 { 00:14:19.119 "name": "BaseBdev3", 00:14:19.119 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:19.119 "is_configured": true, 00:14:19.119 "data_offset": 0, 00:14:19.119 "data_size": 65536 00:14:19.119 }, 00:14:19.119 { 00:14:19.119 "name": "BaseBdev4", 00:14:19.119 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:19.119 "is_configured": true, 00:14:19.119 "data_offset": 0, 00:14:19.119 "data_size": 65536 00:14:19.119 } 00:14:19.119 ] 00:14:19.119 }' 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.119 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.378 [2024-11-26 15:30:17.766332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:19.378 "name": "Existed_Raid", 00:14:19.378 "aliases": [ 00:14:19.378 "fa3d570a-1070-4040-bffc-251006c4ea3c" 00:14:19.378 ], 00:14:19.378 "product_name": "Raid Volume", 00:14:19.378 "block_size": 512, 00:14:19.378 "num_blocks": 196608, 00:14:19.378 "uuid": "fa3d570a-1070-4040-bffc-251006c4ea3c", 00:14:19.378 "assigned_rate_limits": { 00:14:19.378 "rw_ios_per_sec": 0, 00:14:19.378 "rw_mbytes_per_sec": 0, 00:14:19.378 "r_mbytes_per_sec": 0, 00:14:19.378 "w_mbytes_per_sec": 0 00:14:19.378 }, 00:14:19.378 "claimed": false, 00:14:19.378 "zoned": false, 00:14:19.378 "supported_io_types": { 00:14:19.378 
"read": true, 00:14:19.378 "write": true, 00:14:19.378 "unmap": false, 00:14:19.378 "flush": false, 00:14:19.378 "reset": true, 00:14:19.378 "nvme_admin": false, 00:14:19.378 "nvme_io": false, 00:14:19.378 "nvme_io_md": false, 00:14:19.378 "write_zeroes": true, 00:14:19.378 "zcopy": false, 00:14:19.378 "get_zone_info": false, 00:14:19.378 "zone_management": false, 00:14:19.378 "zone_append": false, 00:14:19.378 "compare": false, 00:14:19.378 "compare_and_write": false, 00:14:19.378 "abort": false, 00:14:19.378 "seek_hole": false, 00:14:19.378 "seek_data": false, 00:14:19.378 "copy": false, 00:14:19.378 "nvme_iov_md": false 00:14:19.378 }, 00:14:19.378 "driver_specific": { 00:14:19.378 "raid": { 00:14:19.378 "uuid": "fa3d570a-1070-4040-bffc-251006c4ea3c", 00:14:19.378 "strip_size_kb": 64, 00:14:19.378 "state": "online", 00:14:19.378 "raid_level": "raid5f", 00:14:19.378 "superblock": false, 00:14:19.378 "num_base_bdevs": 4, 00:14:19.378 "num_base_bdevs_discovered": 4, 00:14:19.378 "num_base_bdevs_operational": 4, 00:14:19.378 "base_bdevs_list": [ 00:14:19.378 { 00:14:19.378 "name": "NewBaseBdev", 00:14:19.378 "uuid": "a2438c8b-7128-4d0f-b04f-6b364647fdfd", 00:14:19.378 "is_configured": true, 00:14:19.378 "data_offset": 0, 00:14:19.378 "data_size": 65536 00:14:19.378 }, 00:14:19.378 { 00:14:19.378 "name": "BaseBdev2", 00:14:19.378 "uuid": "a63a8a34-73e7-4238-bed8-7dbb34eed9c3", 00:14:19.378 "is_configured": true, 00:14:19.378 "data_offset": 0, 00:14:19.378 "data_size": 65536 00:14:19.378 }, 00:14:19.378 { 00:14:19.378 "name": "BaseBdev3", 00:14:19.378 "uuid": "abf6ca6a-57ed-4623-893c-e0c15e2edb94", 00:14:19.378 "is_configured": true, 00:14:19.378 "data_offset": 0, 00:14:19.378 "data_size": 65536 00:14:19.378 }, 00:14:19.378 { 00:14:19.378 "name": "BaseBdev4", 00:14:19.378 "uuid": "65c75b05-a37e-4659-83c5-e3c9ef024363", 00:14:19.378 "is_configured": true, 00:14:19.378 "data_offset": 0, 00:14:19.378 "data_size": 65536 00:14:19.378 } 00:14:19.378 ] 00:14:19.378 } 
00:14:19.378 } 00:14:19.378 }' 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:19.378 BaseBdev2 00:14:19.378 BaseBdev3 00:14:19.378 BaseBdev4' 00:14:19.378 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.637 15:30:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.638 15:30:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.638 [2024-11-26 15:30:18.094207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.638 [2024-11-26 15:30:18.094268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.638 [2024-11-26 15:30:18.094353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.638 [2024-11-26 15:30:18.094614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.638 [2024-11-26 15:30:18.094671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 94752 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 94752 ']' 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 94752 
00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.638 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94752 00:14:19.897 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.897 killing process with pid 94752 00:14:19.897 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.897 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94752' 00:14:19.897 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 94752 00:14:19.897 [2024-11-26 15:30:18.141730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.897 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 94752 00:14:19.897 [2024-11-26 15:30:18.181346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.156 15:30:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:20.156 00:14:20.156 real 0m9.393s 00:14:20.156 user 0m16.133s 00:14:20.156 sys 0m1.962s 00:14:20.156 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.156 15:30:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.156 ************************************ 00:14:20.156 END TEST raid5f_state_function_test 00:14:20.156 ************************************ 00:14:20.156 15:30:18 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:20.156 15:30:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:20.156 
15:30:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.156 15:30:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.156 ************************************ 00:14:20.156 START TEST raid5f_state_function_test_sb 00:14:20.156 ************************************ 00:14:20.156 15:30:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:14:20.156 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:20.156 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.157 
15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=95397 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc 
-i 0 -L bdev_raid 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95397' 00:14:20.157 Process raid pid: 95397 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 95397 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95397 ']' 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.157 15:30:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.157 [2024-11-26 15:30:18.558739] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:20.157 [2024-11-26 15:30:18.558933] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.416 [2024-11-26 15:30:18.693326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:20.416 [2024-11-26 15:30:18.729907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.416 [2024-11-26 15:30:18.754770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.416 [2024-11-26 15:30:18.797437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.416 [2024-11-26 15:30:18.797464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.985 [2024-11-26 15:30:19.383856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:20.985 [2024-11-26 15:30:19.383967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:20.985 [2024-11-26 15:30:19.383984] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.985 [2024-11-26 15:30:19.383993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.985 [2024-11-26 15:30:19.384003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.985 [2024-11-26 15:30:19.384010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.985 [2024-11-26 15:30:19.384019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:14:20.985 [2024-11-26 15:30:19.384026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.985 15:30:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.985 "name": "Existed_Raid", 00:14:20.985 "uuid": "383b2669-33df-458c-8063-9d1150862bea", 00:14:20.985 "strip_size_kb": 64, 00:14:20.985 "state": "configuring", 00:14:20.985 "raid_level": "raid5f", 00:14:20.985 "superblock": true, 00:14:20.985 "num_base_bdevs": 4, 00:14:20.985 "num_base_bdevs_discovered": 0, 00:14:20.985 "num_base_bdevs_operational": 4, 00:14:20.985 "base_bdevs_list": [ 00:14:20.985 { 00:14:20.985 "name": "BaseBdev1", 00:14:20.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.985 "is_configured": false, 00:14:20.985 "data_offset": 0, 00:14:20.985 "data_size": 0 00:14:20.985 }, 00:14:20.985 { 00:14:20.985 "name": "BaseBdev2", 00:14:20.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.985 "is_configured": false, 00:14:20.985 "data_offset": 0, 00:14:20.985 "data_size": 0 00:14:20.985 }, 00:14:20.985 { 00:14:20.985 "name": "BaseBdev3", 00:14:20.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.985 "is_configured": false, 00:14:20.985 "data_offset": 0, 00:14:20.985 "data_size": 0 00:14:20.985 }, 00:14:20.985 { 00:14:20.985 "name": "BaseBdev4", 00:14:20.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.985 "is_configured": false, 00:14:20.985 "data_offset": 0, 00:14:20.985 "data_size": 0 00:14:20.985 } 00:14:20.985 ] 00:14:20.985 }' 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.985 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.553 15:30:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.553 [2024-11-26 15:30:19.819852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.553 [2024-11-26 15:30:19.819936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.553 [2024-11-26 15:30:19.831885] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.553 [2024-11-26 15:30:19.831955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.553 [2024-11-26 15:30:19.831999] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.553 [2024-11-26 15:30:19.832019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.553 [2024-11-26 15:30:19.832038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:21.553 [2024-11-26 15:30:19.832056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.553 [2024-11-26 15:30:19.832075] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:21.553 [2024-11-26 15:30:19.832092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:21.553 15:30:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.553 [2024-11-26 15:30:19.852814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.553 BaseBdev1 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.553 [ 00:14:21.553 { 00:14:21.553 "name": "BaseBdev1", 00:14:21.553 "aliases": [ 00:14:21.553 "0dfbfef3-8af1-42d8-9adf-1df4954c8920" 00:14:21.553 ], 00:14:21.553 "product_name": "Malloc disk", 00:14:21.553 "block_size": 512, 00:14:21.553 "num_blocks": 65536, 00:14:21.553 "uuid": "0dfbfef3-8af1-42d8-9adf-1df4954c8920", 00:14:21.553 "assigned_rate_limits": { 00:14:21.553 "rw_ios_per_sec": 0, 00:14:21.553 "rw_mbytes_per_sec": 0, 00:14:21.553 "r_mbytes_per_sec": 0, 00:14:21.553 "w_mbytes_per_sec": 0 00:14:21.553 }, 00:14:21.553 "claimed": true, 00:14:21.553 "claim_type": "exclusive_write", 00:14:21.553 "zoned": false, 00:14:21.553 "supported_io_types": { 00:14:21.553 "read": true, 00:14:21.553 "write": true, 00:14:21.553 "unmap": true, 00:14:21.553 "flush": true, 00:14:21.553 "reset": true, 00:14:21.553 "nvme_admin": false, 00:14:21.553 "nvme_io": false, 00:14:21.553 "nvme_io_md": false, 00:14:21.553 "write_zeroes": true, 00:14:21.553 "zcopy": true, 00:14:21.553 "get_zone_info": false, 00:14:21.553 "zone_management": false, 00:14:21.553 "zone_append": false, 00:14:21.553 "compare": false, 00:14:21.553 "compare_and_write": false, 00:14:21.553 "abort": true, 00:14:21.553 "seek_hole": false, 00:14:21.553 "seek_data": false, 00:14:21.553 "copy": true, 00:14:21.553 "nvme_iov_md": false 00:14:21.553 }, 00:14:21.553 "memory_domains": [ 00:14:21.553 { 00:14:21.553 "dma_device_id": "system", 00:14:21.553 "dma_device_type": 1 00:14:21.553 }, 00:14:21.553 { 00:14:21.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.553 "dma_device_type": 2 00:14:21.553 } 00:14:21.553 ], 00:14:21.553 "driver_specific": {} 00:14:21.553 } 00:14:21.553 ] 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:21.553 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.554 "name": "Existed_Raid", 00:14:21.554 "uuid": "066c12f9-045d-48ca-be91-5ef83b62a38a", 00:14:21.554 "strip_size_kb": 64, 00:14:21.554 "state": "configuring", 00:14:21.554 "raid_level": "raid5f", 00:14:21.554 "superblock": true, 00:14:21.554 "num_base_bdevs": 4, 00:14:21.554 "num_base_bdevs_discovered": 1, 00:14:21.554 "num_base_bdevs_operational": 4, 00:14:21.554 "base_bdevs_list": [ 00:14:21.554 { 00:14:21.554 "name": "BaseBdev1", 00:14:21.554 "uuid": "0dfbfef3-8af1-42d8-9adf-1df4954c8920", 00:14:21.554 "is_configured": true, 00:14:21.554 "data_offset": 2048, 00:14:21.554 "data_size": 63488 00:14:21.554 }, 00:14:21.554 { 00:14:21.554 "name": "BaseBdev2", 00:14:21.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.554 "is_configured": false, 00:14:21.554 "data_offset": 0, 00:14:21.554 "data_size": 0 00:14:21.554 }, 00:14:21.554 { 00:14:21.554 "name": "BaseBdev3", 00:14:21.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.554 "is_configured": false, 00:14:21.554 "data_offset": 0, 00:14:21.554 "data_size": 0 00:14:21.554 }, 00:14:21.554 { 00:14:21.554 "name": "BaseBdev4", 00:14:21.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.554 "is_configured": false, 00:14:21.554 "data_offset": 0, 00:14:21.554 "data_size": 0 00:14:21.554 } 00:14:21.554 ] 00:14:21.554 }' 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.554 15:30:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.122 [2024-11-26 15:30:20.384985] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.122 [2024-11-26 15:30:20.385042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.122 [2024-11-26 15:30:20.393035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.122 [2024-11-26 15:30:20.394897] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.122 [2024-11-26 15:30:20.394935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.122 [2024-11-26 15:30:20.394946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:22.122 [2024-11-26 15:30:20.394953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.122 [2024-11-26 15:30:20.394961] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:22.122 [2024-11-26 15:30:20.394967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.122 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.123 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.123 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.123 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.123 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.123 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.123 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.123 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.123 "name": "Existed_Raid", 00:14:22.123 "uuid": 
"eb70de25-41db-4035-9cf7-fd9da84335ee", 00:14:22.123 "strip_size_kb": 64, 00:14:22.123 "state": "configuring", 00:14:22.123 "raid_level": "raid5f", 00:14:22.123 "superblock": true, 00:14:22.123 "num_base_bdevs": 4, 00:14:22.123 "num_base_bdevs_discovered": 1, 00:14:22.123 "num_base_bdevs_operational": 4, 00:14:22.123 "base_bdevs_list": [ 00:14:22.123 { 00:14:22.123 "name": "BaseBdev1", 00:14:22.123 "uuid": "0dfbfef3-8af1-42d8-9adf-1df4954c8920", 00:14:22.123 "is_configured": true, 00:14:22.123 "data_offset": 2048, 00:14:22.123 "data_size": 63488 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "name": "BaseBdev2", 00:14:22.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.123 "is_configured": false, 00:14:22.123 "data_offset": 0, 00:14:22.123 "data_size": 0 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "name": "BaseBdev3", 00:14:22.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.123 "is_configured": false, 00:14:22.123 "data_offset": 0, 00:14:22.123 "data_size": 0 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "name": "BaseBdev4", 00:14:22.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.123 "is_configured": false, 00:14:22.123 "data_offset": 0, 00:14:22.123 "data_size": 0 00:14:22.123 } 00:14:22.123 ] 00:14:22.123 }' 00:14:22.123 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.123 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.383 [2024-11-26 15:30:20.808232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.383 BaseBdev2 00:14:22.383 
15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.383 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.383 [ 00:14:22.383 { 00:14:22.383 "name": "BaseBdev2", 00:14:22.383 "aliases": [ 00:14:22.383 "9e431ed5-5f5a-4b6a-8be3-88105797bd63" 00:14:22.383 ], 00:14:22.383 "product_name": "Malloc disk", 00:14:22.383 "block_size": 512, 00:14:22.383 "num_blocks": 65536, 00:14:22.383 "uuid": "9e431ed5-5f5a-4b6a-8be3-88105797bd63", 00:14:22.383 "assigned_rate_limits": { 
00:14:22.383 "rw_ios_per_sec": 0, 00:14:22.383 "rw_mbytes_per_sec": 0, 00:14:22.383 "r_mbytes_per_sec": 0, 00:14:22.383 "w_mbytes_per_sec": 0 00:14:22.383 }, 00:14:22.383 "claimed": true, 00:14:22.383 "claim_type": "exclusive_write", 00:14:22.383 "zoned": false, 00:14:22.383 "supported_io_types": { 00:14:22.383 "read": true, 00:14:22.383 "write": true, 00:14:22.383 "unmap": true, 00:14:22.383 "flush": true, 00:14:22.383 "reset": true, 00:14:22.383 "nvme_admin": false, 00:14:22.383 "nvme_io": false, 00:14:22.383 "nvme_io_md": false, 00:14:22.383 "write_zeroes": true, 00:14:22.383 "zcopy": true, 00:14:22.383 "get_zone_info": false, 00:14:22.383 "zone_management": false, 00:14:22.383 "zone_append": false, 00:14:22.383 "compare": false, 00:14:22.383 "compare_and_write": false, 00:14:22.383 "abort": true, 00:14:22.383 "seek_hole": false, 00:14:22.383 "seek_data": false, 00:14:22.383 "copy": true, 00:14:22.383 "nvme_iov_md": false 00:14:22.383 }, 00:14:22.384 "memory_domains": [ 00:14:22.384 { 00:14:22.384 "dma_device_id": "system", 00:14:22.384 "dma_device_type": 1 00:14:22.384 }, 00:14:22.384 { 00:14:22.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.384 "dma_device_type": 2 00:14:22.384 } 00:14:22.384 ], 00:14:22.384 "driver_specific": {} 00:14:22.384 } 00:14:22.384 ] 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.384 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.644 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.644 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.644 "name": "Existed_Raid", 00:14:22.644 "uuid": "eb70de25-41db-4035-9cf7-fd9da84335ee", 00:14:22.644 "strip_size_kb": 64, 00:14:22.644 "state": "configuring", 00:14:22.644 "raid_level": "raid5f", 00:14:22.644 "superblock": true, 00:14:22.644 "num_base_bdevs": 4, 00:14:22.644 "num_base_bdevs_discovered": 2, 00:14:22.644 
"num_base_bdevs_operational": 4, 00:14:22.644 "base_bdevs_list": [ 00:14:22.644 { 00:14:22.644 "name": "BaseBdev1", 00:14:22.644 "uuid": "0dfbfef3-8af1-42d8-9adf-1df4954c8920", 00:14:22.644 "is_configured": true, 00:14:22.644 "data_offset": 2048, 00:14:22.644 "data_size": 63488 00:14:22.644 }, 00:14:22.644 { 00:14:22.644 "name": "BaseBdev2", 00:14:22.644 "uuid": "9e431ed5-5f5a-4b6a-8be3-88105797bd63", 00:14:22.644 "is_configured": true, 00:14:22.644 "data_offset": 2048, 00:14:22.644 "data_size": 63488 00:14:22.644 }, 00:14:22.644 { 00:14:22.644 "name": "BaseBdev3", 00:14:22.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.644 "is_configured": false, 00:14:22.644 "data_offset": 0, 00:14:22.644 "data_size": 0 00:14:22.644 }, 00:14:22.644 { 00:14:22.644 "name": "BaseBdev4", 00:14:22.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.644 "is_configured": false, 00:14:22.644 "data_offset": 0, 00:14:22.644 "data_size": 0 00:14:22.644 } 00:14:22.644 ] 00:14:22.644 }' 00:14:22.644 15:30:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.644 15:30:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.905 [2024-11-26 15:30:21.314023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.905 BaseBdev3 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:22.905 15:30:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.905 [ 00:14:22.905 { 00:14:22.905 "name": "BaseBdev3", 00:14:22.905 "aliases": [ 00:14:22.905 "74c6850b-897e-4e9e-b28c-1cb1c733b6d6" 00:14:22.905 ], 00:14:22.905 "product_name": "Malloc disk", 00:14:22.905 "block_size": 512, 00:14:22.905 "num_blocks": 65536, 00:14:22.905 "uuid": "74c6850b-897e-4e9e-b28c-1cb1c733b6d6", 00:14:22.905 "assigned_rate_limits": { 00:14:22.905 "rw_ios_per_sec": 0, 00:14:22.905 "rw_mbytes_per_sec": 0, 00:14:22.905 "r_mbytes_per_sec": 0, 00:14:22.905 "w_mbytes_per_sec": 0 00:14:22.905 }, 00:14:22.905 "claimed": true, 00:14:22.905 "claim_type": "exclusive_write", 
00:14:22.905 "zoned": false, 00:14:22.905 "supported_io_types": { 00:14:22.905 "read": true, 00:14:22.905 "write": true, 00:14:22.905 "unmap": true, 00:14:22.905 "flush": true, 00:14:22.905 "reset": true, 00:14:22.905 "nvme_admin": false, 00:14:22.905 "nvme_io": false, 00:14:22.905 "nvme_io_md": false, 00:14:22.905 "write_zeroes": true, 00:14:22.905 "zcopy": true, 00:14:22.905 "get_zone_info": false, 00:14:22.905 "zone_management": false, 00:14:22.905 "zone_append": false, 00:14:22.905 "compare": false, 00:14:22.905 "compare_and_write": false, 00:14:22.905 "abort": true, 00:14:22.905 "seek_hole": false, 00:14:22.905 "seek_data": false, 00:14:22.905 "copy": true, 00:14:22.905 "nvme_iov_md": false 00:14:22.905 }, 00:14:22.905 "memory_domains": [ 00:14:22.905 { 00:14:22.905 "dma_device_id": "system", 00:14:22.905 "dma_device_type": 1 00:14:22.905 }, 00:14:22.905 { 00:14:22.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.905 "dma_device_type": 2 00:14:22.905 } 00:14:22.905 ], 00:14:22.905 "driver_specific": {} 00:14:22.905 } 00:14:22.905 ] 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.905 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.906 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.167 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.167 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.167 "name": "Existed_Raid", 00:14:23.167 "uuid": "eb70de25-41db-4035-9cf7-fd9da84335ee", 00:14:23.167 "strip_size_kb": 64, 00:14:23.167 "state": "configuring", 00:14:23.167 "raid_level": "raid5f", 00:14:23.167 "superblock": true, 00:14:23.167 "num_base_bdevs": 4, 00:14:23.167 "num_base_bdevs_discovered": 3, 00:14:23.167 "num_base_bdevs_operational": 4, 00:14:23.167 "base_bdevs_list": [ 00:14:23.167 { 00:14:23.167 "name": "BaseBdev1", 00:14:23.167 "uuid": "0dfbfef3-8af1-42d8-9adf-1df4954c8920", 00:14:23.167 "is_configured": true, 00:14:23.167 "data_offset": 2048, 
00:14:23.167 "data_size": 63488 00:14:23.167 }, 00:14:23.167 { 00:14:23.167 "name": "BaseBdev2", 00:14:23.167 "uuid": "9e431ed5-5f5a-4b6a-8be3-88105797bd63", 00:14:23.167 "is_configured": true, 00:14:23.167 "data_offset": 2048, 00:14:23.167 "data_size": 63488 00:14:23.167 }, 00:14:23.167 { 00:14:23.167 "name": "BaseBdev3", 00:14:23.167 "uuid": "74c6850b-897e-4e9e-b28c-1cb1c733b6d6", 00:14:23.167 "is_configured": true, 00:14:23.167 "data_offset": 2048, 00:14:23.167 "data_size": 63488 00:14:23.167 }, 00:14:23.167 { 00:14:23.167 "name": "BaseBdev4", 00:14:23.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.167 "is_configured": false, 00:14:23.167 "data_offset": 0, 00:14:23.167 "data_size": 0 00:14:23.167 } 00:14:23.167 ] 00:14:23.167 }' 00:14:23.167 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.167 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.427 [2024-11-26 15:30:21.825174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:23.427 [2024-11-26 15:30:21.825456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:23.427 [2024-11-26 15:30:21.825508] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:23.427 [2024-11-26 15:30:21.825816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:23.427 BaseBdev4 00:14:23.427 [2024-11-26 15:30:21.826310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:23.427 
[2024-11-26 15:30:21.826362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:14:23.427 [2024-11-26 15:30:21.826531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.427 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.427 [ 00:14:23.427 { 00:14:23.427 "name": "BaseBdev4", 00:14:23.427 "aliases": [ 
00:14:23.427 "7d7f31ce-85ea-4a26-9d6f-b0e7d3b2ea5b" 00:14:23.427 ], 00:14:23.427 "product_name": "Malloc disk", 00:14:23.427 "block_size": 512, 00:14:23.427 "num_blocks": 65536, 00:14:23.427 "uuid": "7d7f31ce-85ea-4a26-9d6f-b0e7d3b2ea5b", 00:14:23.427 "assigned_rate_limits": { 00:14:23.427 "rw_ios_per_sec": 0, 00:14:23.427 "rw_mbytes_per_sec": 0, 00:14:23.427 "r_mbytes_per_sec": 0, 00:14:23.427 "w_mbytes_per_sec": 0 00:14:23.427 }, 00:14:23.427 "claimed": true, 00:14:23.427 "claim_type": "exclusive_write", 00:14:23.428 "zoned": false, 00:14:23.428 "supported_io_types": { 00:14:23.428 "read": true, 00:14:23.428 "write": true, 00:14:23.428 "unmap": true, 00:14:23.428 "flush": true, 00:14:23.428 "reset": true, 00:14:23.428 "nvme_admin": false, 00:14:23.428 "nvme_io": false, 00:14:23.428 "nvme_io_md": false, 00:14:23.428 "write_zeroes": true, 00:14:23.428 "zcopy": true, 00:14:23.428 "get_zone_info": false, 00:14:23.428 "zone_management": false, 00:14:23.428 "zone_append": false, 00:14:23.428 "compare": false, 00:14:23.428 "compare_and_write": false, 00:14:23.428 "abort": true, 00:14:23.428 "seek_hole": false, 00:14:23.428 "seek_data": false, 00:14:23.428 "copy": true, 00:14:23.428 "nvme_iov_md": false 00:14:23.428 }, 00:14:23.428 "memory_domains": [ 00:14:23.428 { 00:14:23.428 "dma_device_id": "system", 00:14:23.428 "dma_device_type": 1 00:14:23.428 }, 00:14:23.428 { 00:14:23.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.428 "dma_device_type": 2 00:14:23.428 } 00:14:23.428 ], 00:14:23.428 "driver_specific": {} 00:14:23.428 } 00:14:23.428 ] 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.428 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.699 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.699 "name": "Existed_Raid", 00:14:23.699 "uuid": 
"eb70de25-41db-4035-9cf7-fd9da84335ee", 00:14:23.699 "strip_size_kb": 64, 00:14:23.699 "state": "online", 00:14:23.699 "raid_level": "raid5f", 00:14:23.699 "superblock": true, 00:14:23.699 "num_base_bdevs": 4, 00:14:23.699 "num_base_bdevs_discovered": 4, 00:14:23.699 "num_base_bdevs_operational": 4, 00:14:23.699 "base_bdevs_list": [ 00:14:23.699 { 00:14:23.699 "name": "BaseBdev1", 00:14:23.699 "uuid": "0dfbfef3-8af1-42d8-9adf-1df4954c8920", 00:14:23.699 "is_configured": true, 00:14:23.699 "data_offset": 2048, 00:14:23.699 "data_size": 63488 00:14:23.699 }, 00:14:23.699 { 00:14:23.699 "name": "BaseBdev2", 00:14:23.699 "uuid": "9e431ed5-5f5a-4b6a-8be3-88105797bd63", 00:14:23.699 "is_configured": true, 00:14:23.699 "data_offset": 2048, 00:14:23.699 "data_size": 63488 00:14:23.699 }, 00:14:23.699 { 00:14:23.699 "name": "BaseBdev3", 00:14:23.699 "uuid": "74c6850b-897e-4e9e-b28c-1cb1c733b6d6", 00:14:23.699 "is_configured": true, 00:14:23.699 "data_offset": 2048, 00:14:23.699 "data_size": 63488 00:14:23.699 }, 00:14:23.699 { 00:14:23.699 "name": "BaseBdev4", 00:14:23.699 "uuid": "7d7f31ce-85ea-4a26-9d6f-b0e7d3b2ea5b", 00:14:23.699 "is_configured": true, 00:14:23.699 "data_offset": 2048, 00:14:23.699 "data_size": 63488 00:14:23.699 } 00:14:23.699 ] 00:14:23.699 }' 00:14:23.699 15:30:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.699 15:30:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:23.975 15:30:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:23.975 [2024-11-26 15:30:22.257566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:23.975 "name": "Existed_Raid", 00:14:23.975 "aliases": [ 00:14:23.975 "eb70de25-41db-4035-9cf7-fd9da84335ee" 00:14:23.975 ], 00:14:23.975 "product_name": "Raid Volume", 00:14:23.975 "block_size": 512, 00:14:23.975 "num_blocks": 190464, 00:14:23.975 "uuid": "eb70de25-41db-4035-9cf7-fd9da84335ee", 00:14:23.975 "assigned_rate_limits": { 00:14:23.975 "rw_ios_per_sec": 0, 00:14:23.975 "rw_mbytes_per_sec": 0, 00:14:23.975 "r_mbytes_per_sec": 0, 00:14:23.975 "w_mbytes_per_sec": 0 00:14:23.975 }, 00:14:23.975 "claimed": false, 00:14:23.975 "zoned": false, 00:14:23.975 "supported_io_types": { 00:14:23.975 "read": true, 00:14:23.975 "write": true, 00:14:23.975 "unmap": false, 00:14:23.975 "flush": false, 00:14:23.975 "reset": true, 00:14:23.975 "nvme_admin": false, 00:14:23.975 "nvme_io": false, 00:14:23.975 "nvme_io_md": false, 00:14:23.975 "write_zeroes": true, 00:14:23.975 "zcopy": false, 00:14:23.975 "get_zone_info": false, 00:14:23.975 "zone_management": false, 00:14:23.975 
"zone_append": false, 00:14:23.975 "compare": false, 00:14:23.975 "compare_and_write": false, 00:14:23.975 "abort": false, 00:14:23.975 "seek_hole": false, 00:14:23.975 "seek_data": false, 00:14:23.975 "copy": false, 00:14:23.975 "nvme_iov_md": false 00:14:23.975 }, 00:14:23.975 "driver_specific": { 00:14:23.975 "raid": { 00:14:23.975 "uuid": "eb70de25-41db-4035-9cf7-fd9da84335ee", 00:14:23.975 "strip_size_kb": 64, 00:14:23.975 "state": "online", 00:14:23.975 "raid_level": "raid5f", 00:14:23.975 "superblock": true, 00:14:23.975 "num_base_bdevs": 4, 00:14:23.975 "num_base_bdevs_discovered": 4, 00:14:23.975 "num_base_bdevs_operational": 4, 00:14:23.975 "base_bdevs_list": [ 00:14:23.975 { 00:14:23.975 "name": "BaseBdev1", 00:14:23.975 "uuid": "0dfbfef3-8af1-42d8-9adf-1df4954c8920", 00:14:23.975 "is_configured": true, 00:14:23.975 "data_offset": 2048, 00:14:23.975 "data_size": 63488 00:14:23.975 }, 00:14:23.975 { 00:14:23.975 "name": "BaseBdev2", 00:14:23.975 "uuid": "9e431ed5-5f5a-4b6a-8be3-88105797bd63", 00:14:23.975 "is_configured": true, 00:14:23.975 "data_offset": 2048, 00:14:23.975 "data_size": 63488 00:14:23.975 }, 00:14:23.975 { 00:14:23.975 "name": "BaseBdev3", 00:14:23.975 "uuid": "74c6850b-897e-4e9e-b28c-1cb1c733b6d6", 00:14:23.975 "is_configured": true, 00:14:23.975 "data_offset": 2048, 00:14:23.975 "data_size": 63488 00:14:23.975 }, 00:14:23.975 { 00:14:23.975 "name": "BaseBdev4", 00:14:23.975 "uuid": "7d7f31ce-85ea-4a26-9d6f-b0e7d3b2ea5b", 00:14:23.975 "is_configured": true, 00:14:23.975 "data_offset": 2048, 00:14:23.975 "data_size": 63488 00:14:23.975 } 00:14:23.975 ] 00:14:23.975 } 00:14:23.975 } 00:14:23.975 }' 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:23.975 BaseBdev2 00:14:23.975 BaseBdev3 
00:14:23.975 BaseBdev4' 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.975 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.235 [2024-11-26 15:30:22.593474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.235 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.236 
15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.236 "name": "Existed_Raid", 00:14:24.236 "uuid": "eb70de25-41db-4035-9cf7-fd9da84335ee", 00:14:24.236 "strip_size_kb": 64, 00:14:24.236 "state": "online", 00:14:24.236 "raid_level": "raid5f", 00:14:24.236 "superblock": true, 00:14:24.236 "num_base_bdevs": 4, 00:14:24.236 "num_base_bdevs_discovered": 3, 00:14:24.236 "num_base_bdevs_operational": 3, 00:14:24.236 "base_bdevs_list": [ 00:14:24.236 { 00:14:24.236 "name": null, 00:14:24.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.236 "is_configured": false, 00:14:24.236 "data_offset": 0, 00:14:24.236 "data_size": 63488 00:14:24.236 }, 00:14:24.236 { 00:14:24.236 "name": "BaseBdev2", 00:14:24.236 "uuid": "9e431ed5-5f5a-4b6a-8be3-88105797bd63", 
00:14:24.236 "is_configured": true, 00:14:24.236 "data_offset": 2048, 00:14:24.236 "data_size": 63488 00:14:24.236 }, 00:14:24.236 { 00:14:24.236 "name": "BaseBdev3", 00:14:24.236 "uuid": "74c6850b-897e-4e9e-b28c-1cb1c733b6d6", 00:14:24.236 "is_configured": true, 00:14:24.236 "data_offset": 2048, 00:14:24.236 "data_size": 63488 00:14:24.236 }, 00:14:24.236 { 00:14:24.236 "name": "BaseBdev4", 00:14:24.236 "uuid": "7d7f31ce-85ea-4a26-9d6f-b0e7d3b2ea5b", 00:14:24.236 "is_configured": true, 00:14:24.236 "data_offset": 2048, 00:14:24.236 "data_size": 63488 00:14:24.236 } 00:14:24.236 ] 00:14:24.236 }' 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.236 15:30:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:24.805 
15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.805 [2024-11-26 15:30:23.081127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:24.805 [2024-11-26 15:30:23.081356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.805 [2024-11-26 15:30:23.092576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.805 15:30:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.805 [2024-11-26 15:30:23.148629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.805 [2024-11-26 15:30:23.219911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:24.805 [2024-11-26 15:30:23.219961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:24.805 15:30:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.805 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.065 BaseBdev2 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.065 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.065 [ 00:14:25.065 { 00:14:25.065 "name": "BaseBdev2", 00:14:25.065 "aliases": [ 00:14:25.065 "fe530176-bb50-4caf-96e1-8456a821ad51" 00:14:25.065 ], 00:14:25.065 "product_name": "Malloc disk", 00:14:25.065 "block_size": 512, 00:14:25.065 "num_blocks": 65536, 00:14:25.066 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:25.066 "assigned_rate_limits": { 00:14:25.066 "rw_ios_per_sec": 0, 00:14:25.066 "rw_mbytes_per_sec": 0, 00:14:25.066 "r_mbytes_per_sec": 0, 00:14:25.066 "w_mbytes_per_sec": 0 00:14:25.066 }, 
00:14:25.066 "claimed": false, 00:14:25.066 "zoned": false, 00:14:25.066 "supported_io_types": { 00:14:25.066 "read": true, 00:14:25.066 "write": true, 00:14:25.066 "unmap": true, 00:14:25.066 "flush": true, 00:14:25.066 "reset": true, 00:14:25.066 "nvme_admin": false, 00:14:25.066 "nvme_io": false, 00:14:25.066 "nvme_io_md": false, 00:14:25.066 "write_zeroes": true, 00:14:25.066 "zcopy": true, 00:14:25.066 "get_zone_info": false, 00:14:25.066 "zone_management": false, 00:14:25.066 "zone_append": false, 00:14:25.066 "compare": false, 00:14:25.066 "compare_and_write": false, 00:14:25.066 "abort": true, 00:14:25.066 "seek_hole": false, 00:14:25.066 "seek_data": false, 00:14:25.066 "copy": true, 00:14:25.066 "nvme_iov_md": false 00:14:25.066 }, 00:14:25.066 "memory_domains": [ 00:14:25.066 { 00:14:25.066 "dma_device_id": "system", 00:14:25.066 "dma_device_type": 1 00:14:25.066 }, 00:14:25.066 { 00:14:25.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.066 "dma_device_type": 2 00:14:25.066 } 00:14:25.066 ], 00:14:25.066 "driver_specific": {} 00:14:25.066 } 00:14:25.066 ] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.066 BaseBdev3 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.066 [ 00:14:25.066 { 00:14:25.066 "name": "BaseBdev3", 00:14:25.066 "aliases": [ 00:14:25.066 "851ea8b1-8406-43f9-8935-6cb52f3586a8" 00:14:25.066 ], 00:14:25.066 "product_name": "Malloc disk", 00:14:25.066 "block_size": 512, 00:14:25.066 "num_blocks": 65536, 00:14:25.066 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:25.066 "assigned_rate_limits": { 00:14:25.066 "rw_ios_per_sec": 0, 00:14:25.066 
"rw_mbytes_per_sec": 0, 00:14:25.066 "r_mbytes_per_sec": 0, 00:14:25.066 "w_mbytes_per_sec": 0 00:14:25.066 }, 00:14:25.066 "claimed": false, 00:14:25.066 "zoned": false, 00:14:25.066 "supported_io_types": { 00:14:25.066 "read": true, 00:14:25.066 "write": true, 00:14:25.066 "unmap": true, 00:14:25.066 "flush": true, 00:14:25.066 "reset": true, 00:14:25.066 "nvme_admin": false, 00:14:25.066 "nvme_io": false, 00:14:25.066 "nvme_io_md": false, 00:14:25.066 "write_zeroes": true, 00:14:25.066 "zcopy": true, 00:14:25.066 "get_zone_info": false, 00:14:25.066 "zone_management": false, 00:14:25.066 "zone_append": false, 00:14:25.066 "compare": false, 00:14:25.066 "compare_and_write": false, 00:14:25.066 "abort": true, 00:14:25.066 "seek_hole": false, 00:14:25.066 "seek_data": false, 00:14:25.066 "copy": true, 00:14:25.066 "nvme_iov_md": false 00:14:25.066 }, 00:14:25.066 "memory_domains": [ 00:14:25.066 { 00:14:25.066 "dma_device_id": "system", 00:14:25.066 "dma_device_type": 1 00:14:25.066 }, 00:14:25.066 { 00:14:25.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.066 "dma_device_type": 2 00:14:25.066 } 00:14:25.066 ], 00:14:25.066 "driver_specific": {} 00:14:25.066 } 00:14:25.066 ] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:25.066 BaseBdev4 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.066 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.066 [ 00:14:25.066 { 00:14:25.066 "name": "BaseBdev4", 00:14:25.066 "aliases": [ 00:14:25.066 "22c1a6e2-35ce-455c-89f8-b76e51f52b46" 00:14:25.066 ], 00:14:25.066 "product_name": "Malloc disk", 00:14:25.066 "block_size": 512, 00:14:25.066 "num_blocks": 65536, 00:14:25.066 "uuid": "22c1a6e2-35ce-455c-89f8-b76e51f52b46", 
00:14:25.066 "assigned_rate_limits": { 00:14:25.066 "rw_ios_per_sec": 0, 00:14:25.067 "rw_mbytes_per_sec": 0, 00:14:25.067 "r_mbytes_per_sec": 0, 00:14:25.067 "w_mbytes_per_sec": 0 00:14:25.067 }, 00:14:25.067 "claimed": false, 00:14:25.067 "zoned": false, 00:14:25.067 "supported_io_types": { 00:14:25.067 "read": true, 00:14:25.067 "write": true, 00:14:25.067 "unmap": true, 00:14:25.067 "flush": true, 00:14:25.067 "reset": true, 00:14:25.067 "nvme_admin": false, 00:14:25.067 "nvme_io": false, 00:14:25.067 "nvme_io_md": false, 00:14:25.067 "write_zeroes": true, 00:14:25.067 "zcopy": true, 00:14:25.067 "get_zone_info": false, 00:14:25.067 "zone_management": false, 00:14:25.067 "zone_append": false, 00:14:25.067 "compare": false, 00:14:25.067 "compare_and_write": false, 00:14:25.067 "abort": true, 00:14:25.067 "seek_hole": false, 00:14:25.067 "seek_data": false, 00:14:25.067 "copy": true, 00:14:25.067 "nvme_iov_md": false 00:14:25.067 }, 00:14:25.067 "memory_domains": [ 00:14:25.067 { 00:14:25.067 "dma_device_id": "system", 00:14:25.067 "dma_device_type": 1 00:14:25.067 }, 00:14:25.067 { 00:14:25.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.067 "dma_device_type": 2 00:14:25.067 } 00:14:25.067 ], 00:14:25.067 "driver_specific": {} 00:14:25.067 } 00:14:25.067 ] 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.067 [2024-11-26 15:30:23.453273] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.067 [2024-11-26 15:30:23.453317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.067 [2024-11-26 15:30:23.453335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.067 [2024-11-26 15:30:23.455091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.067 [2024-11-26 15:30:23.455136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.067 "name": "Existed_Raid", 00:14:25.067 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 00:14:25.067 "strip_size_kb": 64, 00:14:25.067 "state": "configuring", 00:14:25.067 "raid_level": "raid5f", 00:14:25.067 "superblock": true, 00:14:25.067 "num_base_bdevs": 4, 00:14:25.067 "num_base_bdevs_discovered": 3, 00:14:25.067 "num_base_bdevs_operational": 4, 00:14:25.067 "base_bdevs_list": [ 00:14:25.067 { 00:14:25.067 "name": "BaseBdev1", 00:14:25.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.067 "is_configured": false, 00:14:25.067 "data_offset": 0, 00:14:25.067 "data_size": 0 00:14:25.067 }, 00:14:25.067 { 00:14:25.067 "name": "BaseBdev2", 00:14:25.067 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:25.067 "is_configured": true, 00:14:25.067 "data_offset": 2048, 00:14:25.067 "data_size": 63488 00:14:25.067 }, 00:14:25.067 { 00:14:25.067 "name": "BaseBdev3", 00:14:25.067 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:25.067 "is_configured": true, 00:14:25.067 "data_offset": 2048, 00:14:25.067 "data_size": 63488 00:14:25.067 }, 00:14:25.067 { 00:14:25.067 "name": "BaseBdev4", 00:14:25.067 "uuid": 
"22c1a6e2-35ce-455c-89f8-b76e51f52b46", 00:14:25.067 "is_configured": true, 00:14:25.067 "data_offset": 2048, 00:14:25.067 "data_size": 63488 00:14:25.067 } 00:14:25.067 ] 00:14:25.067 }' 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.067 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.637 [2024-11-26 15:30:23.885370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.637 15:30:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.637 "name": "Existed_Raid", 00:14:25.637 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 00:14:25.637 "strip_size_kb": 64, 00:14:25.637 "state": "configuring", 00:14:25.637 "raid_level": "raid5f", 00:14:25.637 "superblock": true, 00:14:25.637 "num_base_bdevs": 4, 00:14:25.637 "num_base_bdevs_discovered": 2, 00:14:25.637 "num_base_bdevs_operational": 4, 00:14:25.637 "base_bdevs_list": [ 00:14:25.637 { 00:14:25.637 "name": "BaseBdev1", 00:14:25.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.637 "is_configured": false, 00:14:25.637 "data_offset": 0, 00:14:25.637 "data_size": 0 00:14:25.637 }, 00:14:25.637 { 00:14:25.637 "name": null, 00:14:25.637 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:25.637 "is_configured": false, 00:14:25.637 "data_offset": 0, 00:14:25.637 "data_size": 63488 00:14:25.637 }, 00:14:25.637 { 00:14:25.637 "name": "BaseBdev3", 00:14:25.637 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:25.637 "is_configured": true, 00:14:25.637 "data_offset": 2048, 00:14:25.637 "data_size": 63488 00:14:25.637 }, 00:14:25.637 { 
00:14:25.637 "name": "BaseBdev4", 00:14:25.637 "uuid": "22c1a6e2-35ce-455c-89f8-b76e51f52b46", 00:14:25.637 "is_configured": true, 00:14:25.637 "data_offset": 2048, 00:14:25.637 "data_size": 63488 00:14:25.637 } 00:14:25.637 ] 00:14:25.637 }' 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.637 15:30:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.898 [2024-11-26 15:30:24.360636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.898 BaseBdev1 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- 
# local bdev_name=BaseBdev1 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.898 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.158 [ 00:14:26.158 { 00:14:26.158 "name": "BaseBdev1", 00:14:26.158 "aliases": [ 00:14:26.158 "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab" 00:14:26.158 ], 00:14:26.158 "product_name": "Malloc disk", 00:14:26.158 "block_size": 512, 00:14:26.158 "num_blocks": 65536, 00:14:26.158 "uuid": "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab", 00:14:26.158 "assigned_rate_limits": { 00:14:26.158 "rw_ios_per_sec": 0, 00:14:26.158 "rw_mbytes_per_sec": 0, 00:14:26.158 "r_mbytes_per_sec": 0, 00:14:26.158 "w_mbytes_per_sec": 0 00:14:26.158 }, 00:14:26.158 "claimed": true, 00:14:26.158 "claim_type": "exclusive_write", 00:14:26.158 "zoned": false, 00:14:26.158 "supported_io_types": { 00:14:26.158 
"read": true, 00:14:26.158 "write": true, 00:14:26.158 "unmap": true, 00:14:26.158 "flush": true, 00:14:26.158 "reset": true, 00:14:26.158 "nvme_admin": false, 00:14:26.158 "nvme_io": false, 00:14:26.158 "nvme_io_md": false, 00:14:26.158 "write_zeroes": true, 00:14:26.158 "zcopy": true, 00:14:26.158 "get_zone_info": false, 00:14:26.158 "zone_management": false, 00:14:26.158 "zone_append": false, 00:14:26.158 "compare": false, 00:14:26.158 "compare_and_write": false, 00:14:26.158 "abort": true, 00:14:26.158 "seek_hole": false, 00:14:26.158 "seek_data": false, 00:14:26.158 "copy": true, 00:14:26.158 "nvme_iov_md": false 00:14:26.158 }, 00:14:26.158 "memory_domains": [ 00:14:26.158 { 00:14:26.158 "dma_device_id": "system", 00:14:26.158 "dma_device_type": 1 00:14:26.158 }, 00:14:26.158 { 00:14:26.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.158 "dma_device_type": 2 00:14:26.158 } 00:14:26.158 ], 00:14:26.158 "driver_specific": {} 00:14:26.158 } 00:14:26.158 ] 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.158 15:30:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.158 "name": "Existed_Raid", 00:14:26.158 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 00:14:26.158 "strip_size_kb": 64, 00:14:26.158 "state": "configuring", 00:14:26.158 "raid_level": "raid5f", 00:14:26.158 "superblock": true, 00:14:26.158 "num_base_bdevs": 4, 00:14:26.158 "num_base_bdevs_discovered": 3, 00:14:26.158 "num_base_bdevs_operational": 4, 00:14:26.158 "base_bdevs_list": [ 00:14:26.158 { 00:14:26.158 "name": "BaseBdev1", 00:14:26.158 "uuid": "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab", 00:14:26.158 "is_configured": true, 00:14:26.158 "data_offset": 2048, 00:14:26.158 "data_size": 63488 00:14:26.158 }, 00:14:26.158 { 00:14:26.158 "name": null, 00:14:26.158 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:26.158 "is_configured": false, 00:14:26.158 "data_offset": 0, 00:14:26.158 "data_size": 63488 00:14:26.158 }, 00:14:26.158 { 
00:14:26.158 "name": "BaseBdev3", 00:14:26.158 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:26.158 "is_configured": true, 00:14:26.158 "data_offset": 2048, 00:14:26.158 "data_size": 63488 00:14:26.158 }, 00:14:26.158 { 00:14:26.158 "name": "BaseBdev4", 00:14:26.158 "uuid": "22c1a6e2-35ce-455c-89f8-b76e51f52b46", 00:14:26.158 "is_configured": true, 00:14:26.158 "data_offset": 2048, 00:14:26.158 "data_size": 63488 00:14:26.158 } 00:14:26.158 ] 00:14:26.158 }' 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.158 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.418 [2024-11-26 15:30:24.868833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.418 15:30:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.418 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.678 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.678 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.678 "name": "Existed_Raid", 00:14:26.678 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 
00:14:26.678 "strip_size_kb": 64, 00:14:26.678 "state": "configuring", 00:14:26.678 "raid_level": "raid5f", 00:14:26.678 "superblock": true, 00:14:26.678 "num_base_bdevs": 4, 00:14:26.678 "num_base_bdevs_discovered": 2, 00:14:26.678 "num_base_bdevs_operational": 4, 00:14:26.678 "base_bdevs_list": [ 00:14:26.678 { 00:14:26.678 "name": "BaseBdev1", 00:14:26.678 "uuid": "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab", 00:14:26.678 "is_configured": true, 00:14:26.678 "data_offset": 2048, 00:14:26.678 "data_size": 63488 00:14:26.678 }, 00:14:26.678 { 00:14:26.678 "name": null, 00:14:26.678 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:26.678 "is_configured": false, 00:14:26.678 "data_offset": 0, 00:14:26.678 "data_size": 63488 00:14:26.678 }, 00:14:26.678 { 00:14:26.678 "name": null, 00:14:26.678 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:26.678 "is_configured": false, 00:14:26.678 "data_offset": 0, 00:14:26.678 "data_size": 63488 00:14:26.678 }, 00:14:26.678 { 00:14:26.678 "name": "BaseBdev4", 00:14:26.678 "uuid": "22c1a6e2-35ce-455c-89f8-b76e51f52b46", 00:14:26.678 "is_configured": true, 00:14:26.678 "data_offset": 2048, 00:14:26.678 "data_size": 63488 00:14:26.678 } 00:14:26.678 ] 00:14:26.678 }' 00:14:26.678 15:30:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.678 15:30:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.938 [2024-11-26 15:30:25.345004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.938 
15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.938 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.938 "name": "Existed_Raid", 00:14:26.938 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 00:14:26.938 "strip_size_kb": 64, 00:14:26.938 "state": "configuring", 00:14:26.938 "raid_level": "raid5f", 00:14:26.938 "superblock": true, 00:14:26.938 "num_base_bdevs": 4, 00:14:26.939 "num_base_bdevs_discovered": 3, 00:14:26.939 "num_base_bdevs_operational": 4, 00:14:26.939 "base_bdevs_list": [ 00:14:26.939 { 00:14:26.939 "name": "BaseBdev1", 00:14:26.939 "uuid": "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab", 00:14:26.939 "is_configured": true, 00:14:26.939 "data_offset": 2048, 00:14:26.939 "data_size": 63488 00:14:26.939 }, 00:14:26.939 { 00:14:26.939 "name": null, 00:14:26.939 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:26.939 "is_configured": false, 00:14:26.939 "data_offset": 0, 00:14:26.939 "data_size": 63488 00:14:26.939 }, 00:14:26.939 { 00:14:26.939 "name": "BaseBdev3", 00:14:26.939 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:26.939 "is_configured": true, 00:14:26.939 "data_offset": 2048, 00:14:26.939 "data_size": 63488 00:14:26.939 }, 00:14:26.939 { 00:14:26.939 "name": "BaseBdev4", 00:14:26.939 "uuid": "22c1a6e2-35ce-455c-89f8-b76e51f52b46", 00:14:26.939 "is_configured": true, 00:14:26.939 "data_offset": 2048, 00:14:26.939 "data_size": 63488 00:14:26.939 } 
00:14:26.939 ] 00:14:26.939 }' 00:14:26.939 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.939 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.509 [2024-11-26 15:30:25.769119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.509 "name": "Existed_Raid", 00:14:27.509 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 00:14:27.509 "strip_size_kb": 64, 00:14:27.509 "state": "configuring", 00:14:27.509 "raid_level": "raid5f", 00:14:27.509 "superblock": true, 00:14:27.509 "num_base_bdevs": 4, 00:14:27.509 "num_base_bdevs_discovered": 2, 00:14:27.509 "num_base_bdevs_operational": 4, 00:14:27.509 "base_bdevs_list": [ 00:14:27.509 { 00:14:27.509 "name": null, 00:14:27.509 "uuid": "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab", 00:14:27.509 "is_configured": false, 00:14:27.509 
"data_offset": 0, 00:14:27.509 "data_size": 63488 00:14:27.509 }, 00:14:27.509 { 00:14:27.509 "name": null, 00:14:27.509 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:27.509 "is_configured": false, 00:14:27.509 "data_offset": 0, 00:14:27.509 "data_size": 63488 00:14:27.509 }, 00:14:27.509 { 00:14:27.509 "name": "BaseBdev3", 00:14:27.509 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:27.509 "is_configured": true, 00:14:27.509 "data_offset": 2048, 00:14:27.509 "data_size": 63488 00:14:27.509 }, 00:14:27.509 { 00:14:27.509 "name": "BaseBdev4", 00:14:27.509 "uuid": "22c1a6e2-35ce-455c-89f8-b76e51f52b46", 00:14:27.509 "is_configured": true, 00:14:27.509 "data_offset": 2048, 00:14:27.509 "data_size": 63488 00:14:27.509 } 00:14:27.509 ] 00:14:27.509 }' 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.509 15:30:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.078 15:30:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.078 [2024-11-26 15:30:26.303972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.078 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.078 "name": "Existed_Raid", 00:14:28.078 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 00:14:28.078 "strip_size_kb": 64, 00:14:28.078 "state": "configuring", 00:14:28.078 "raid_level": "raid5f", 00:14:28.078 "superblock": true, 00:14:28.078 "num_base_bdevs": 4, 00:14:28.078 "num_base_bdevs_discovered": 3, 00:14:28.078 "num_base_bdevs_operational": 4, 00:14:28.078 "base_bdevs_list": [ 00:14:28.078 { 00:14:28.079 "name": null, 00:14:28.079 "uuid": "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab", 00:14:28.079 "is_configured": false, 00:14:28.079 "data_offset": 0, 00:14:28.079 "data_size": 63488 00:14:28.079 }, 00:14:28.079 { 00:14:28.079 "name": "BaseBdev2", 00:14:28.079 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:28.079 "is_configured": true, 00:14:28.079 "data_offset": 2048, 00:14:28.079 "data_size": 63488 00:14:28.079 }, 00:14:28.079 { 00:14:28.079 "name": "BaseBdev3", 00:14:28.079 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:28.079 "is_configured": true, 00:14:28.079 "data_offset": 2048, 00:14:28.079 "data_size": 63488 00:14:28.079 }, 00:14:28.079 { 00:14:28.079 "name": "BaseBdev4", 00:14:28.079 "uuid": "22c1a6e2-35ce-455c-89f8-b76e51f52b46", 00:14:28.079 "is_configured": true, 00:14:28.079 "data_offset": 2048, 00:14:28.079 "data_size": 63488 00:14:28.079 } 00:14:28.079 ] 00:14:28.079 }' 00:14:28.079 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.079 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.338 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.339 15:30:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.339 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.339 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.339 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b1678ba6-fe69-4fd4-ba05-d0cb117d44ab 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.598 [2024-11-26 15:30:26.883272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:28.598 NewBaseBdev 00:14:28.598 [2024-11-26 15:30:26.883521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:28.598 [2024-11-26 15:30:26.883542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:28.598 [2024-11-26 15:30:26.883793] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:14:28.598 [2024-11-26 15:30:26.884282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:28.598 [2024-11-26 15:30:26.884294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:28.598 [2024-11-26 15:30:26.884400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.598 
15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.598 [ 00:14:28.598 { 00:14:28.598 "name": "NewBaseBdev", 00:14:28.598 "aliases": [ 00:14:28.598 "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab" 00:14:28.598 ], 00:14:28.598 "product_name": "Malloc disk", 00:14:28.598 "block_size": 512, 00:14:28.598 "num_blocks": 65536, 00:14:28.598 "uuid": "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab", 00:14:28.598 "assigned_rate_limits": { 00:14:28.598 "rw_ios_per_sec": 0, 00:14:28.598 "rw_mbytes_per_sec": 0, 00:14:28.598 "r_mbytes_per_sec": 0, 00:14:28.598 "w_mbytes_per_sec": 0 00:14:28.598 }, 00:14:28.598 "claimed": true, 00:14:28.598 "claim_type": "exclusive_write", 00:14:28.598 "zoned": false, 00:14:28.598 "supported_io_types": { 00:14:28.598 "read": true, 00:14:28.598 "write": true, 00:14:28.598 "unmap": true, 00:14:28.598 "flush": true, 00:14:28.598 "reset": true, 00:14:28.598 "nvme_admin": false, 00:14:28.598 "nvme_io": false, 00:14:28.598 "nvme_io_md": false, 00:14:28.598 "write_zeroes": true, 00:14:28.598 "zcopy": true, 00:14:28.598 "get_zone_info": false, 00:14:28.598 "zone_management": false, 00:14:28.598 "zone_append": false, 00:14:28.598 "compare": false, 00:14:28.598 "compare_and_write": false, 00:14:28.598 "abort": true, 00:14:28.598 "seek_hole": false, 00:14:28.598 "seek_data": false, 00:14:28.598 "copy": true, 00:14:28.598 "nvme_iov_md": false 00:14:28.598 }, 00:14:28.598 "memory_domains": [ 00:14:28.598 { 00:14:28.598 "dma_device_id": "system", 00:14:28.598 "dma_device_type": 1 00:14:28.598 }, 00:14:28.598 { 00:14:28.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.598 "dma_device_type": 2 00:14:28.598 } 00:14:28.598 ], 00:14:28.598 "driver_specific": {} 00:14:28.598 } 00:14:28.598 ] 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:28.598 15:30:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.598 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.599 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.599 "name": "Existed_Raid", 00:14:28.599 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 00:14:28.599 
"strip_size_kb": 64, 00:14:28.599 "state": "online", 00:14:28.599 "raid_level": "raid5f", 00:14:28.599 "superblock": true, 00:14:28.599 "num_base_bdevs": 4, 00:14:28.599 "num_base_bdevs_discovered": 4, 00:14:28.599 "num_base_bdevs_operational": 4, 00:14:28.599 "base_bdevs_list": [ 00:14:28.599 { 00:14:28.599 "name": "NewBaseBdev", 00:14:28.599 "uuid": "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab", 00:14:28.599 "is_configured": true, 00:14:28.599 "data_offset": 2048, 00:14:28.599 "data_size": 63488 00:14:28.599 }, 00:14:28.599 { 00:14:28.599 "name": "BaseBdev2", 00:14:28.599 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:28.599 "is_configured": true, 00:14:28.599 "data_offset": 2048, 00:14:28.599 "data_size": 63488 00:14:28.599 }, 00:14:28.599 { 00:14:28.599 "name": "BaseBdev3", 00:14:28.599 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:28.599 "is_configured": true, 00:14:28.599 "data_offset": 2048, 00:14:28.599 "data_size": 63488 00:14:28.599 }, 00:14:28.599 { 00:14:28.599 "name": "BaseBdev4", 00:14:28.599 "uuid": "22c1a6e2-35ce-455c-89f8-b76e51f52b46", 00:14:28.599 "is_configured": true, 00:14:28.599 "data_offset": 2048, 00:14:28.599 "data_size": 63488 00:14:28.599 } 00:14:28.599 ] 00:14:28.599 }' 00:14:28.599 15:30:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.599 15:30:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@184 -- # local name 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.169 [2024-11-26 15:30:27.355645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.169 "name": "Existed_Raid", 00:14:29.169 "aliases": [ 00:14:29.169 "ad14655b-16dd-4743-9288-ede92ce51c1e" 00:14:29.169 ], 00:14:29.169 "product_name": "Raid Volume", 00:14:29.169 "block_size": 512, 00:14:29.169 "num_blocks": 190464, 00:14:29.169 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 00:14:29.169 "assigned_rate_limits": { 00:14:29.169 "rw_ios_per_sec": 0, 00:14:29.169 "rw_mbytes_per_sec": 0, 00:14:29.169 "r_mbytes_per_sec": 0, 00:14:29.169 "w_mbytes_per_sec": 0 00:14:29.169 }, 00:14:29.169 "claimed": false, 00:14:29.169 "zoned": false, 00:14:29.169 "supported_io_types": { 00:14:29.169 "read": true, 00:14:29.169 "write": true, 00:14:29.169 "unmap": false, 00:14:29.169 "flush": false, 00:14:29.169 "reset": true, 00:14:29.169 "nvme_admin": false, 00:14:29.169 "nvme_io": false, 00:14:29.169 "nvme_io_md": false, 00:14:29.169 "write_zeroes": true, 00:14:29.169 "zcopy": false, 00:14:29.169 "get_zone_info": false, 00:14:29.169 "zone_management": false, 00:14:29.169 "zone_append": false, 00:14:29.169 "compare": 
false, 00:14:29.169 "compare_and_write": false, 00:14:29.169 "abort": false, 00:14:29.169 "seek_hole": false, 00:14:29.169 "seek_data": false, 00:14:29.169 "copy": false, 00:14:29.169 "nvme_iov_md": false 00:14:29.169 }, 00:14:29.169 "driver_specific": { 00:14:29.169 "raid": { 00:14:29.169 "uuid": "ad14655b-16dd-4743-9288-ede92ce51c1e", 00:14:29.169 "strip_size_kb": 64, 00:14:29.169 "state": "online", 00:14:29.169 "raid_level": "raid5f", 00:14:29.169 "superblock": true, 00:14:29.169 "num_base_bdevs": 4, 00:14:29.169 "num_base_bdevs_discovered": 4, 00:14:29.169 "num_base_bdevs_operational": 4, 00:14:29.169 "base_bdevs_list": [ 00:14:29.169 { 00:14:29.169 "name": "NewBaseBdev", 00:14:29.169 "uuid": "b1678ba6-fe69-4fd4-ba05-d0cb117d44ab", 00:14:29.169 "is_configured": true, 00:14:29.169 "data_offset": 2048, 00:14:29.169 "data_size": 63488 00:14:29.169 }, 00:14:29.169 { 00:14:29.169 "name": "BaseBdev2", 00:14:29.169 "uuid": "fe530176-bb50-4caf-96e1-8456a821ad51", 00:14:29.169 "is_configured": true, 00:14:29.169 "data_offset": 2048, 00:14:29.169 "data_size": 63488 00:14:29.169 }, 00:14:29.169 { 00:14:29.169 "name": "BaseBdev3", 00:14:29.169 "uuid": "851ea8b1-8406-43f9-8935-6cb52f3586a8", 00:14:29.169 "is_configured": true, 00:14:29.169 "data_offset": 2048, 00:14:29.169 "data_size": 63488 00:14:29.169 }, 00:14:29.169 { 00:14:29.169 "name": "BaseBdev4", 00:14:29.169 "uuid": "22c1a6e2-35ce-455c-89f8-b76e51f52b46", 00:14:29.169 "is_configured": true, 00:14:29.169 "data_offset": 2048, 00:14:29.169 "data_size": 63488 00:14:29.169 } 00:14:29.169 ] 00:14:29.169 } 00:14:29.169 } 00:14:29.169 }' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:29.169 BaseBdev2 00:14:29.169 BaseBdev3 00:14:29.169 BaseBdev4' 00:14:29.169 15:30:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.169 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.429 15:30:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.429 [2024-11-26 15:30:27.707521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:29.429 [2024-11-26 15:30:27.707588] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.429 [2024-11-26 15:30:27.707665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.429 [2024-11-26 15:30:27.707941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.429 [2024-11-26 15:30:27.707959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 95397 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95397 ']' 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 95397 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:29.429 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.430 15:30:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95397 00:14:29.430 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.430 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.430 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95397' 00:14:29.430 killing process with pid 95397 00:14:29.430 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 95397 00:14:29.430 [2024-11-26 15:30:27.754026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.430 15:30:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 95397 00:14:29.430 [2024-11-26 15:30:27.795363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.690 15:30:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:29.690 00:14:29.690 real 0m9.550s 00:14:29.690 user 0m16.319s 00:14:29.690 sys 0m2.080s 00:14:29.690 15:30:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.690 15:30:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.690 ************************************ 00:14:29.690 END TEST raid5f_state_function_test_sb 00:14:29.690 ************************************ 00:14:29.690 15:30:28 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:29.690 15:30:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:29.690 15:30:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.690 15:30:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.690 ************************************ 00:14:29.690 START TEST raid5f_superblock_test 00:14:29.690 
************************************ 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=96051 00:14:29.690 
15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 96051 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 96051 ']' 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.690 15:30:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.950 [2024-11-26 15:30:28.180342] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:29.950 [2024-11-26 15:30:28.180474] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96051 ] 00:14:29.950 [2024-11-26 15:30:28.315086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:29.950 [2024-11-26 15:30:28.354132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.950 [2024-11-26 15:30:28.381529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.210 [2024-11-26 15:30:28.424707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.210 [2024-11-26 15:30:28.424742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:30.780 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.781 malloc1 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.781 [2024-11-26 15:30:29.035902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:30.781 [2024-11-26 15:30:29.036002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.781 [2024-11-26 15:30:29.036046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:30.781 [2024-11-26 15:30:29.036085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.781 [2024-11-26 15:30:29.038207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.781 [2024-11-26 15:30:29.038284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:30.781 pt1 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:30.781 15:30:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.781 malloc2 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.781 [2024-11-26 15:30:29.068511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.781 [2024-11-26 15:30:29.068607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.781 [2024-11-26 15:30:29.068629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:30.781 [2024-11-26 15:30:29.068637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.781 [2024-11-26 15:30:29.070650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.781 [2024-11-26 15:30:29.070687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.781 pt2 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.781 malloc3 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.781 [2024-11-26 15:30:29.097105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:30.781 [2024-11-26 15:30:29.097198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.781 [2024-11-26 15:30:29.097237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:14:30.781 [2024-11-26 15:30:29.097266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.781 [2024-11-26 15:30:29.099291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.781 [2024-11-26 15:30:29.099354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:30.781 pt3 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.781 malloc4 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd 
bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.781 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.781 [2024-11-26 15:30:29.138888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:30.781 [2024-11-26 15:30:29.138970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.781 [2024-11-26 15:30:29.139024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:30.781 [2024-11-26 15:30:29.139050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.781 [2024-11-26 15:30:29.141043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.781 [2024-11-26 15:30:29.141111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:30.781 pt4 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.782 [2024-11-26 15:30:29.150938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:30.782 [2024-11-26 15:30:29.152738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.782 [2024-11-26 15:30:29.152837] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:30.782 [2024-11-26 15:30:29.152933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:30.782 [2024-11-26 15:30:29.153126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:30.782 [2024-11-26 15:30:29.153168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:30.782 [2024-11-26 15:30:29.153421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:30.782 [2024-11-26 15:30:29.153885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:30.782 [2024-11-26 15:30:29.153935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:30.782 [2024-11-26 15:30:29.154080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.782 "name": "raid_bdev1", 00:14:30.782 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:30.782 "strip_size_kb": 64, 00:14:30.782 "state": "online", 00:14:30.782 "raid_level": "raid5f", 00:14:30.782 "superblock": true, 00:14:30.782 "num_base_bdevs": 4, 00:14:30.782 "num_base_bdevs_discovered": 4, 00:14:30.782 "num_base_bdevs_operational": 4, 00:14:30.782 "base_bdevs_list": [ 00:14:30.782 { 00:14:30.782 "name": "pt1", 00:14:30.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.782 "is_configured": true, 00:14:30.782 "data_offset": 2048, 00:14:30.782 "data_size": 63488 00:14:30.782 }, 00:14:30.782 { 00:14:30.782 "name": "pt2", 00:14:30.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.782 "is_configured": true, 00:14:30.782 "data_offset": 2048, 00:14:30.782 "data_size": 63488 00:14:30.782 }, 00:14:30.782 { 00:14:30.782 "name": "pt3", 00:14:30.782 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.782 "is_configured": true, 00:14:30.782 "data_offset": 2048, 00:14:30.782 "data_size": 63488 00:14:30.782 }, 00:14:30.782 { 00:14:30.782 "name": "pt4", 00:14:30.782 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:30.782 "is_configured": true, 00:14:30.782 "data_offset": 2048, 00:14:30.782 "data_size": 63488 00:14:30.782 } 00:14:30.782 ] 00:14:30.782 }' 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.782 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.349 [2024-11-26 15:30:29.628230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.349 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:31.349 "name": "raid_bdev1", 00:14:31.349 "aliases": [ 00:14:31.349 "2d3ebb76-6f50-45e6-8976-666318062c87" 00:14:31.349 ], 00:14:31.349 "product_name": "Raid Volume", 00:14:31.349 
"block_size": 512, 00:14:31.349 "num_blocks": 190464, 00:14:31.349 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:31.349 "assigned_rate_limits": { 00:14:31.349 "rw_ios_per_sec": 0, 00:14:31.349 "rw_mbytes_per_sec": 0, 00:14:31.349 "r_mbytes_per_sec": 0, 00:14:31.349 "w_mbytes_per_sec": 0 00:14:31.349 }, 00:14:31.349 "claimed": false, 00:14:31.349 "zoned": false, 00:14:31.349 "supported_io_types": { 00:14:31.349 "read": true, 00:14:31.349 "write": true, 00:14:31.349 "unmap": false, 00:14:31.349 "flush": false, 00:14:31.349 "reset": true, 00:14:31.349 "nvme_admin": false, 00:14:31.349 "nvme_io": false, 00:14:31.349 "nvme_io_md": false, 00:14:31.349 "write_zeroes": true, 00:14:31.349 "zcopy": false, 00:14:31.349 "get_zone_info": false, 00:14:31.349 "zone_management": false, 00:14:31.349 "zone_append": false, 00:14:31.349 "compare": false, 00:14:31.349 "compare_and_write": false, 00:14:31.349 "abort": false, 00:14:31.349 "seek_hole": false, 00:14:31.349 "seek_data": false, 00:14:31.349 "copy": false, 00:14:31.349 "nvme_iov_md": false 00:14:31.349 }, 00:14:31.349 "driver_specific": { 00:14:31.349 "raid": { 00:14:31.349 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:31.349 "strip_size_kb": 64, 00:14:31.349 "state": "online", 00:14:31.349 "raid_level": "raid5f", 00:14:31.349 "superblock": true, 00:14:31.349 "num_base_bdevs": 4, 00:14:31.349 "num_base_bdevs_discovered": 4, 00:14:31.349 "num_base_bdevs_operational": 4, 00:14:31.349 "base_bdevs_list": [ 00:14:31.349 { 00:14:31.349 "name": "pt1", 00:14:31.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.349 "is_configured": true, 00:14:31.349 "data_offset": 2048, 00:14:31.349 "data_size": 63488 00:14:31.349 }, 00:14:31.349 { 00:14:31.349 "name": "pt2", 00:14:31.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.349 "is_configured": true, 00:14:31.349 "data_offset": 2048, 00:14:31.349 "data_size": 63488 00:14:31.349 }, 00:14:31.349 { 00:14:31.349 "name": "pt3", 00:14:31.349 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:31.349 "is_configured": true, 00:14:31.349 "data_offset": 2048, 00:14:31.349 "data_size": 63488 00:14:31.349 }, 00:14:31.349 { 00:14:31.349 "name": "pt4", 00:14:31.349 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:31.349 "is_configured": true, 00:14:31.349 "data_offset": 2048, 00:14:31.349 "data_size": 63488 00:14:31.349 } 00:14:31.349 ] 00:14:31.349 } 00:14:31.349 } 00:14:31.349 }' 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:31.350 pt2 00:14:31.350 pt3 00:14:31.350 pt4' 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.350 15:30:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.350 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.610 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:31.611 [2024-11-26 15:30:29.956123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2d3ebb76-6f50-45e6-8976-666318062c87 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2d3ebb76-6f50-45e6-8976-666318062c87 ']' 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.611 15:30:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:31.611 [2024-11-26 15:30:30.003969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.611 [2024-11-26 15:30:30.003996] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.611 [2024-11-26 15:30:30.004091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.611 [2024-11-26 15:30:30.004207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.611 [2024-11-26 15:30:30.004221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.611 15:30:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.611 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:31.871 15:30:30 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.871 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.872 [2024-11-26 15:30:30.172079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:31.872 [2024-11-26 15:30:30.174428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:14:31.872 [2024-11-26 15:30:30.174473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:31.872 [2024-11-26 15:30:30.174504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:31.872 [2024-11-26 15:30:30.174552] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:31.872 [2024-11-26 15:30:30.174610] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:31.872 [2024-11-26 15:30:30.174629] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:31.872 [2024-11-26 15:30:30.174646] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:31.872 [2024-11-26 15:30:30.174658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.872 [2024-11-26 15:30:30.174669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:14:31.872 request: 00:14:31.872 { 00:14:31.872 "name": "raid_bdev1", 00:14:31.872 "raid_level": "raid5f", 00:14:31.872 "base_bdevs": [ 00:14:31.872 "malloc1", 00:14:31.872 "malloc2", 00:14:31.872 "malloc3", 00:14:31.872 "malloc4" 00:14:31.872 ], 00:14:31.872 "strip_size_kb": 64, 00:14:31.872 "superblock": false, 00:14:31.872 "method": "bdev_raid_create", 00:14:31.872 "req_id": 1 00:14:31.872 } 00:14:31.872 Got JSON-RPC error response 00:14:31.872 response: 00:14:31.872 { 00:14:31.872 "code": -17, 00:14:31.872 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:31.872 } 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # 
es=1 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.872 [2024-11-26 15:30:30.228044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:31.872 [2024-11-26 15:30:30.228139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.872 [2024-11-26 15:30:30.228172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:31.872 [2024-11-26 15:30:30.228227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.872 [2024-11-26 15:30:30.230695] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:31.872 [2024-11-26 15:30:30.230770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:31.872 [2024-11-26 15:30:30.230862] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:31.872 [2024-11-26 15:30:30.230953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:31.872 pt1 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.872 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.872 "name": "raid_bdev1", 00:14:31.872 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:31.872 "strip_size_kb": 64, 00:14:31.872 "state": "configuring", 00:14:31.872 "raid_level": "raid5f", 00:14:31.872 "superblock": true, 00:14:31.872 "num_base_bdevs": 4, 00:14:31.872 "num_base_bdevs_discovered": 1, 00:14:31.872 "num_base_bdevs_operational": 4, 00:14:31.872 "base_bdevs_list": [ 00:14:31.872 { 00:14:31.872 "name": "pt1", 00:14:31.872 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.872 "is_configured": true, 00:14:31.872 "data_offset": 2048, 00:14:31.872 "data_size": 63488 00:14:31.872 }, 00:14:31.872 { 00:14:31.872 "name": null, 00:14:31.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.872 "is_configured": false, 00:14:31.872 "data_offset": 2048, 00:14:31.872 "data_size": 63488 00:14:31.872 }, 00:14:31.872 { 00:14:31.872 "name": null, 00:14:31.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.872 "is_configured": false, 00:14:31.872 "data_offset": 2048, 00:14:31.872 "data_size": 63488 00:14:31.872 }, 00:14:31.872 { 00:14:31.872 "name": null, 00:14:31.872 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:31.873 "is_configured": false, 00:14:31.873 "data_offset": 2048, 00:14:31.873 "data_size": 63488 00:14:31.873 } 00:14:31.873 ] 00:14:31.873 }' 00:14:31.873 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.873 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.442 [2024-11-26 15:30:30.692231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:32.442 [2024-11-26 15:30:30.692404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.442 [2024-11-26 15:30:30.692437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:32.442 [2024-11-26 15:30:30.692453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.442 [2024-11-26 15:30:30.692967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.442 [2024-11-26 15:30:30.692990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:32.442 [2024-11-26 15:30:30.693077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:32.442 [2024-11-26 15:30:30.693104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.442 pt2 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.442 [2024-11-26 15:30:30.700185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.442 "name": "raid_bdev1", 00:14:32.442 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:32.442 "strip_size_kb": 64, 00:14:32.442 "state": "configuring", 00:14:32.442 "raid_level": "raid5f", 00:14:32.442 "superblock": true, 00:14:32.442 
"num_base_bdevs": 4, 00:14:32.442 "num_base_bdevs_discovered": 1, 00:14:32.442 "num_base_bdevs_operational": 4, 00:14:32.442 "base_bdevs_list": [ 00:14:32.442 { 00:14:32.442 "name": "pt1", 00:14:32.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:32.442 "is_configured": true, 00:14:32.442 "data_offset": 2048, 00:14:32.442 "data_size": 63488 00:14:32.442 }, 00:14:32.442 { 00:14:32.442 "name": null, 00:14:32.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.442 "is_configured": false, 00:14:32.442 "data_offset": 0, 00:14:32.442 "data_size": 63488 00:14:32.442 }, 00:14:32.442 { 00:14:32.442 "name": null, 00:14:32.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.442 "is_configured": false, 00:14:32.442 "data_offset": 2048, 00:14:32.442 "data_size": 63488 00:14:32.442 }, 00:14:32.442 { 00:14:32.442 "name": null, 00:14:32.442 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:32.442 "is_configured": false, 00:14:32.442 "data_offset": 2048, 00:14:32.442 "data_size": 63488 00:14:32.442 } 00:14:32.442 ] 00:14:32.442 }' 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.442 15:30:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.703 [2024-11-26 15:30:31.140289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:32.703 [2024-11-26 
15:30:31.140443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.703 [2024-11-26 15:30:31.140484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:32.703 [2024-11-26 15:30:31.140513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.703 [2024-11-26 15:30:31.141063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.703 [2024-11-26 15:30:31.141127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:32.703 [2024-11-26 15:30:31.141270] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:32.703 [2024-11-26 15:30:31.141328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.703 pt2 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.703 [2024-11-26 15:30:31.152249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:32.703 [2024-11-26 15:30:31.152354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.703 [2024-11-26 15:30:31.152390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:32.703 [2024-11-26 15:30:31.152415] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:32.703 [2024-11-26 15:30:31.152853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.703 [2024-11-26 15:30:31.152908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:32.703 [2024-11-26 15:30:31.153001] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:32.703 [2024-11-26 15:30:31.153049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:32.703 pt3 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.703 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.703 [2024-11-26 15:30:31.164248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:32.703 [2024-11-26 15:30:31.164297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.703 [2024-11-26 15:30:31.164319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:32.703 [2024-11-26 15:30:31.164326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.703 [2024-11-26 15:30:31.164677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.704 [2024-11-26 15:30:31.164694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:32.704 [2024-11-26 15:30:31.164755] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:32.704 [2024-11-26 15:30:31.164774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:32.704 [2024-11-26 15:30:31.164904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:32.704 [2024-11-26 15:30:31.164914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:32.704 [2024-11-26 15:30:31.165154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:32.704 [2024-11-26 15:30:31.166806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:32.704 [2024-11-26 15:30:31.166829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:32.704 [2024-11-26 15:30:31.166942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.704 pt4 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.704 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.965 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.965 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.965 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.965 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.965 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.965 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.965 "name": "raid_bdev1", 00:14:32.965 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:32.965 "strip_size_kb": 64, 00:14:32.965 "state": "online", 00:14:32.965 "raid_level": "raid5f", 00:14:32.965 "superblock": true, 00:14:32.965 "num_base_bdevs": 4, 00:14:32.965 "num_base_bdevs_discovered": 4, 00:14:32.965 "num_base_bdevs_operational": 4, 00:14:32.965 "base_bdevs_list": [ 00:14:32.965 { 00:14:32.965 "name": "pt1", 00:14:32.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:32.965 "is_configured": true, 00:14:32.965 "data_offset": 2048, 00:14:32.965 "data_size": 63488 00:14:32.965 }, 00:14:32.965 { 00:14:32.965 "name": "pt2", 00:14:32.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.965 "is_configured": true, 00:14:32.965 "data_offset": 2048, 00:14:32.965 "data_size": 63488 00:14:32.965 }, 00:14:32.965 { 00:14:32.965 "name": "pt3", 
00:14:32.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.965 "is_configured": true, 00:14:32.965 "data_offset": 2048, 00:14:32.965 "data_size": 63488 00:14:32.965 }, 00:14:32.965 { 00:14:32.965 "name": "pt4", 00:14:32.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:32.965 "is_configured": true, 00:14:32.965 "data_offset": 2048, 00:14:32.965 "data_size": 63488 00:14:32.965 } 00:14:32.965 ] 00:14:32.965 }' 00:14:32.965 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.965 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.226 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:33.226 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:33.226 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:33.226 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:33.226 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:33.226 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.227 [2024-11-26 15:30:31.582100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.227 15:30:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:33.227 "name": "raid_bdev1", 00:14:33.227 "aliases": [ 00:14:33.227 "2d3ebb76-6f50-45e6-8976-666318062c87" 00:14:33.227 ], 00:14:33.227 "product_name": "Raid Volume", 00:14:33.227 "block_size": 512, 00:14:33.227 "num_blocks": 190464, 00:14:33.227 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:33.227 "assigned_rate_limits": { 00:14:33.227 "rw_ios_per_sec": 0, 00:14:33.227 "rw_mbytes_per_sec": 0, 00:14:33.227 "r_mbytes_per_sec": 0, 00:14:33.227 "w_mbytes_per_sec": 0 00:14:33.227 }, 00:14:33.227 "claimed": false, 00:14:33.227 "zoned": false, 00:14:33.227 "supported_io_types": { 00:14:33.227 "read": true, 00:14:33.227 "write": true, 00:14:33.227 "unmap": false, 00:14:33.227 "flush": false, 00:14:33.227 "reset": true, 00:14:33.227 "nvme_admin": false, 00:14:33.227 "nvme_io": false, 00:14:33.227 "nvme_io_md": false, 00:14:33.227 "write_zeroes": true, 00:14:33.227 "zcopy": false, 00:14:33.227 "get_zone_info": false, 00:14:33.227 "zone_management": false, 00:14:33.227 "zone_append": false, 00:14:33.227 "compare": false, 00:14:33.227 "compare_and_write": false, 00:14:33.227 "abort": false, 00:14:33.227 "seek_hole": false, 00:14:33.227 "seek_data": false, 00:14:33.227 "copy": false, 00:14:33.227 "nvme_iov_md": false 00:14:33.227 }, 00:14:33.227 "driver_specific": { 00:14:33.227 "raid": { 00:14:33.227 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:33.227 "strip_size_kb": 64, 00:14:33.227 "state": "online", 00:14:33.227 "raid_level": "raid5f", 00:14:33.227 "superblock": true, 00:14:33.227 "num_base_bdevs": 4, 00:14:33.227 "num_base_bdevs_discovered": 4, 00:14:33.227 "num_base_bdevs_operational": 4, 00:14:33.227 "base_bdevs_list": [ 00:14:33.227 { 00:14:33.227 "name": "pt1", 00:14:33.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:33.227 "is_configured": true, 00:14:33.227 "data_offset": 2048, 00:14:33.227 "data_size": 63488 00:14:33.227 }, 00:14:33.227 { 00:14:33.227 
"name": "pt2", 00:14:33.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.227 "is_configured": true, 00:14:33.227 "data_offset": 2048, 00:14:33.227 "data_size": 63488 00:14:33.227 }, 00:14:33.227 { 00:14:33.227 "name": "pt3", 00:14:33.227 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.227 "is_configured": true, 00:14:33.227 "data_offset": 2048, 00:14:33.227 "data_size": 63488 00:14:33.227 }, 00:14:33.227 { 00:14:33.227 "name": "pt4", 00:14:33.227 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:33.227 "is_configured": true, 00:14:33.227 "data_offset": 2048, 00:14:33.227 "data_size": 63488 00:14:33.227 } 00:14:33.227 ] 00:14:33.227 } 00:14:33.227 } 00:14:33.227 }' 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:33.227 pt2 00:14:33.227 pt3 00:14:33.227 pt4' 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:33.227 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.488 15:30:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.488 [2024-11-26 15:30:31.910154] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2d3ebb76-6f50-45e6-8976-666318062c87 '!=' 2d3ebb76-6f50-45e6-8976-666318062c87 ']' 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:33.488 15:30:31 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.488 [2024-11-26 15:30:31.954047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.488 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.749 15:30:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.749 15:30:31 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.749 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.749 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.749 15:30:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.749 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.749 "name": "raid_bdev1", 00:14:33.749 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:33.749 "strip_size_kb": 64, 00:14:33.749 "state": "online", 00:14:33.749 "raid_level": "raid5f", 00:14:33.749 "superblock": true, 00:14:33.749 "num_base_bdevs": 4, 00:14:33.749 "num_base_bdevs_discovered": 3, 00:14:33.749 "num_base_bdevs_operational": 3, 00:14:33.749 "base_bdevs_list": [ 00:14:33.749 { 00:14:33.749 "name": null, 00:14:33.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.749 "is_configured": false, 00:14:33.749 "data_offset": 0, 00:14:33.749 "data_size": 63488 00:14:33.749 }, 00:14:33.749 { 00:14:33.749 "name": "pt2", 00:14:33.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.749 "is_configured": true, 00:14:33.749 "data_offset": 2048, 00:14:33.749 "data_size": 63488 00:14:33.749 }, 00:14:33.749 { 00:14:33.749 "name": "pt3", 00:14:33.749 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.749 "is_configured": true, 00:14:33.749 "data_offset": 2048, 00:14:33.749 "data_size": 63488 00:14:33.749 }, 00:14:33.749 { 00:14:33.749 "name": "pt4", 00:14:33.749 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:33.749 "is_configured": true, 00:14:33.749 "data_offset": 2048, 00:14:33.749 "data_size": 63488 00:14:33.749 } 00:14:33.749 ] 00:14:33.749 }' 00:14:33.749 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.749 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.009 [2024-11-26 15:30:32.402128] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.009 [2024-11-26 15:30:32.402173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.009 [2024-11-26 15:30:32.402315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.009 [2024-11-26 15:30:32.402402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.009 [2024-11-26 15:30:32.402426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.009 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:34.270 15:30:32 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.270 [2024-11-26 15:30:32.502093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:34.270 [2024-11-26 15:30:32.502145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.270 [2024-11-26 15:30:32.502164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:34.270 [2024-11-26 15:30:32.502173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.270 [2024-11-26 15:30:32.504723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.270 [2024-11-26 15:30:32.504758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:34.270 [2024-11-26 15:30:32.504837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:34.270 [2024-11-26 15:30:32.504875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.270 pt2 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.270 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.271 "name": "raid_bdev1", 00:14:34.271 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:34.271 "strip_size_kb": 64, 00:14:34.271 "state": "configuring", 00:14:34.271 "raid_level": "raid5f", 00:14:34.271 "superblock": true, 00:14:34.271 "num_base_bdevs": 4, 00:14:34.271 "num_base_bdevs_discovered": 1, 00:14:34.271 "num_base_bdevs_operational": 3, 00:14:34.271 "base_bdevs_list": [ 00:14:34.271 { 00:14:34.271 "name": null, 00:14:34.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.271 "is_configured": false, 
00:14:34.271 "data_offset": 2048, 00:14:34.271 "data_size": 63488 00:14:34.271 }, 00:14:34.271 { 00:14:34.271 "name": "pt2", 00:14:34.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.271 "is_configured": true, 00:14:34.271 "data_offset": 2048, 00:14:34.271 "data_size": 63488 00:14:34.271 }, 00:14:34.271 { 00:14:34.271 "name": null, 00:14:34.271 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.271 "is_configured": false, 00:14:34.271 "data_offset": 2048, 00:14:34.271 "data_size": 63488 00:14:34.271 }, 00:14:34.271 { 00:14:34.271 "name": null, 00:14:34.271 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:34.271 "is_configured": false, 00:14:34.271 "data_offset": 2048, 00:14:34.271 "data_size": 63488 00:14:34.271 } 00:14:34.271 ] 00:14:34.271 }' 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.271 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.531 [2024-11-26 15:30:32.886224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:34.531 [2024-11-26 15:30:32.886330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.531 [2024-11-26 15:30:32.886370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:34.531 [2024-11-26 15:30:32.886398] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.531 [2024-11-26 15:30:32.886815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.531 [2024-11-26 15:30:32.886877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:34.531 [2024-11-26 15:30:32.886981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:34.531 [2024-11-26 15:30:32.887037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:34.531 pt3 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.531 "name": "raid_bdev1", 00:14:34.531 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:34.531 "strip_size_kb": 64, 00:14:34.531 "state": "configuring", 00:14:34.531 "raid_level": "raid5f", 00:14:34.531 "superblock": true, 00:14:34.531 "num_base_bdevs": 4, 00:14:34.531 "num_base_bdevs_discovered": 2, 00:14:34.531 "num_base_bdevs_operational": 3, 00:14:34.531 "base_bdevs_list": [ 00:14:34.531 { 00:14:34.531 "name": null, 00:14:34.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.531 "is_configured": false, 00:14:34.531 "data_offset": 2048, 00:14:34.531 "data_size": 63488 00:14:34.531 }, 00:14:34.531 { 00:14:34.531 "name": "pt2", 00:14:34.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.531 "is_configured": true, 00:14:34.531 "data_offset": 2048, 00:14:34.531 "data_size": 63488 00:14:34.531 }, 00:14:34.531 { 00:14:34.531 "name": "pt3", 00:14:34.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.531 "is_configured": true, 00:14:34.531 "data_offset": 2048, 00:14:34.531 "data_size": 63488 00:14:34.531 }, 00:14:34.531 { 00:14:34.531 "name": null, 00:14:34.531 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:34.531 "is_configured": false, 00:14:34.531 "data_offset": 2048, 00:14:34.531 "data_size": 63488 00:14:34.531 } 00:14:34.531 ] 00:14:34.531 }' 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.531 15:30:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:35.102 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:35.102 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:35.102 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:35.102 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:35.102 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.102 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.102 [2024-11-26 15:30:33.282322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:35.102 [2024-11-26 15:30:33.282381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.102 [2024-11-26 15:30:33.282403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:35.103 [2024-11-26 15:30:33.282413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.103 [2024-11-26 15:30:33.282862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.103 [2024-11-26 15:30:33.282879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:35.103 [2024-11-26 15:30:33.282955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:35.103 [2024-11-26 15:30:33.282984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:35.103 [2024-11-26 15:30:33.283114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:35.103 [2024-11-26 15:30:33.283124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:35.103 [2024-11-26 15:30:33.283400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006490 00:14:35.103 [2024-11-26 15:30:33.283986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:35.103 [2024-11-26 15:30:33.284010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:35.103 [2024-11-26 15:30:33.284261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.103 pt4 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.103 "name": "raid_bdev1", 00:14:35.103 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:35.103 "strip_size_kb": 64, 00:14:35.103 "state": "online", 00:14:35.103 "raid_level": "raid5f", 00:14:35.103 "superblock": true, 00:14:35.103 "num_base_bdevs": 4, 00:14:35.103 "num_base_bdevs_discovered": 3, 00:14:35.103 "num_base_bdevs_operational": 3, 00:14:35.103 "base_bdevs_list": [ 00:14:35.103 { 00:14:35.103 "name": null, 00:14:35.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.103 "is_configured": false, 00:14:35.103 "data_offset": 2048, 00:14:35.103 "data_size": 63488 00:14:35.103 }, 00:14:35.103 { 00:14:35.103 "name": "pt2", 00:14:35.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.103 "is_configured": true, 00:14:35.103 "data_offset": 2048, 00:14:35.103 "data_size": 63488 00:14:35.103 }, 00:14:35.103 { 00:14:35.103 "name": "pt3", 00:14:35.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.103 "is_configured": true, 00:14:35.103 "data_offset": 2048, 00:14:35.103 "data_size": 63488 00:14:35.103 }, 00:14:35.103 { 00:14:35.103 "name": "pt4", 00:14:35.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:35.103 "is_configured": true, 00:14:35.103 "data_offset": 2048, 00:14:35.103 "data_size": 63488 00:14:35.103 } 00:14:35.103 ] 00:14:35.103 }' 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.103 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.364 
15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.364 [2024-11-26 15:30:33.747165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.364 [2024-11-26 15:30:33.747293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.364 [2024-11-26 15:30:33.747431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.364 [2024-11-26 15:30:33.747539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.364 [2024-11-26 15:30:33.747593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete 
pt4 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.364 [2024-11-26 15:30:33.819135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:35.364 [2024-11-26 15:30:33.819222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.364 [2024-11-26 15:30:33.819243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:35.364 [2024-11-26 15:30:33.819256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.364 [2024-11-26 15:30:33.821975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.364 [2024-11-26 15:30:33.822016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:35.364 [2024-11-26 15:30:33.822100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:35.364 [2024-11-26 15:30:33.822149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:35.364 [2024-11-26 15:30:33.822299] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:35.364 [2024-11-26 15:30:33.822317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.364 [2024-11-26 
15:30:33.822336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:14:35.364 [2024-11-26 15:30:33.822381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:35.364 [2024-11-26 15:30:33.822482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:35.364 pt1 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.364 15:30:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.364 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.624 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.624 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.624 "name": "raid_bdev1", 00:14:35.624 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:35.624 "strip_size_kb": 64, 00:14:35.624 "state": "configuring", 00:14:35.624 "raid_level": "raid5f", 00:14:35.624 "superblock": true, 00:14:35.624 "num_base_bdevs": 4, 00:14:35.624 "num_base_bdevs_discovered": 2, 00:14:35.624 "num_base_bdevs_operational": 3, 00:14:35.624 "base_bdevs_list": [ 00:14:35.624 { 00:14:35.624 "name": null, 00:14:35.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.624 "is_configured": false, 00:14:35.624 "data_offset": 2048, 00:14:35.624 "data_size": 63488 00:14:35.624 }, 00:14:35.624 { 00:14:35.624 "name": "pt2", 00:14:35.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.624 "is_configured": true, 00:14:35.624 "data_offset": 2048, 00:14:35.624 "data_size": 63488 00:14:35.624 }, 00:14:35.624 { 00:14:35.624 "name": "pt3", 00:14:35.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.624 "is_configured": true, 00:14:35.624 "data_offset": 2048, 00:14:35.624 "data_size": 63488 00:14:35.624 }, 00:14:35.624 { 00:14:35.624 "name": null, 00:14:35.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:35.624 "is_configured": false, 00:14:35.624 "data_offset": 2048, 00:14:35.624 "data_size": 63488 00:14:35.624 } 00:14:35.624 ] 00:14:35.624 }' 00:14:35.624 15:30:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.624 15:30:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd 
bdev_raid_get_bdevs configuring 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.885 [2024-11-26 15:30:34.343263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:35.885 [2024-11-26 15:30:34.343403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.885 [2024-11-26 15:30:34.343446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:35.885 [2024-11-26 15:30:34.343480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.885 [2024-11-26 15:30:34.344017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.885 [2024-11-26 15:30:34.344079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:35.885 [2024-11-26 15:30:34.344209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:35.885 [2024-11-26 15:30:34.344267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:35.885 [2024-11-26 15:30:34.344437] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:35.885 [2024-11-26 15:30:34.344476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:35.885 [2024-11-26 15:30:34.344803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:35.885 [2024-11-26 15:30:34.345483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:35.885 [2024-11-26 15:30:34.345543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:35.885 [2024-11-26 15:30:34.345785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.885 pt4 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.885 
15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.885 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.145 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.145 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.145 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.145 "name": "raid_bdev1", 00:14:36.145 "uuid": "2d3ebb76-6f50-45e6-8976-666318062c87", 00:14:36.145 "strip_size_kb": 64, 00:14:36.145 "state": "online", 00:14:36.145 "raid_level": "raid5f", 00:14:36.145 "superblock": true, 00:14:36.145 "num_base_bdevs": 4, 00:14:36.145 "num_base_bdevs_discovered": 3, 00:14:36.145 "num_base_bdevs_operational": 3, 00:14:36.145 "base_bdevs_list": [ 00:14:36.145 { 00:14:36.145 "name": null, 00:14:36.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.145 "is_configured": false, 00:14:36.145 "data_offset": 2048, 00:14:36.145 "data_size": 63488 00:14:36.145 }, 00:14:36.145 { 00:14:36.145 "name": "pt2", 00:14:36.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.145 "is_configured": true, 00:14:36.145 "data_offset": 2048, 00:14:36.145 "data_size": 63488 00:14:36.145 }, 00:14:36.145 { 00:14:36.145 "name": "pt3", 00:14:36.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.145 "is_configured": true, 00:14:36.145 "data_offset": 2048, 00:14:36.145 "data_size": 63488 00:14:36.145 }, 00:14:36.145 { 00:14:36.145 "name": "pt4", 00:14:36.145 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.145 "is_configured": true, 00:14:36.145 "data_offset": 2048, 00:14:36.145 "data_size": 63488 00:14:36.145 } 00:14:36.145 ] 00:14:36.145 }' 00:14:36.145 15:30:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.145 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:36.405 [2024-11-26 15:30:34.840826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.405 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.665 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2d3ebb76-6f50-45e6-8976-666318062c87 '!=' 2d3ebb76-6f50-45e6-8976-666318062c87 ']' 00:14:36.665 15:30:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 96051 00:14:36.665 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 96051 ']' 00:14:36.665 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 
96051 00:14:36.665 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:36.666 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.666 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96051 00:14:36.666 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.666 killing process with pid 96051 00:14:36.666 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.666 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96051' 00:14:36.666 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 96051 00:14:36.666 [2024-11-26 15:30:34.911863] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:36.666 [2024-11-26 15:30:34.911986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.666 15:30:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 96051 00:14:36.666 [2024-11-26 15:30:34.912076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.666 [2024-11-26 15:30:34.912089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:36.666 [2024-11-26 15:30:34.992308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.925 15:30:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:36.925 00:14:36.925 real 0m7.229s 00:14:36.925 user 0m12.051s 00:14:36.925 sys 0m1.532s 00:14:36.925 15:30:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.925 ************************************ 00:14:36.925 END TEST raid5f_superblock_test 00:14:36.925 
************************************ 00:14:36.925 15:30:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.925 15:30:35 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:36.925 15:30:35 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:36.925 15:30:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:36.925 15:30:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.925 15:30:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.925 ************************************ 00:14:36.925 START TEST raid5f_rebuild_test 00:14:36.925 ************************************ 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.925 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:37.186 15:30:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=96521 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 96521 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 96521 ']' 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.186 15:30:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.186 [2024-11-26 15:30:35.486174] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:37.186 [2024-11-26 15:30:35.486377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:37.186 Zero copy mechanism will not be used. 00:14:37.186 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96521 ] 00:14:37.186 [2024-11-26 15:30:35.620802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:14:37.186 [2024-11-26 15:30:35.656308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.446 [2024-11-26 15:30:35.695748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.446 [2024-11-26 15:30:35.772074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.446 [2024-11-26 15:30:35.772118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.017 BaseBdev1_malloc 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.017 [2024-11-26 15:30:36.339503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:38.017 [2024-11-26 15:30:36.339666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.017 [2024-11-26 15:30:36.339705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:14:38.017 [2024-11-26 15:30:36.339721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.017 [2024-11-26 15:30:36.342183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.017 [2024-11-26 15:30:36.342230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:38.017 BaseBdev1 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.017 BaseBdev2_malloc 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.017 [2024-11-26 15:30:36.366227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:38.017 [2024-11-26 15:30:36.366293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.017 [2024-11-26 15:30:36.366311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:38.017 [2024-11-26 15:30:36.366322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.017 [2024-11-26 15:30:36.368673] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.017 [2024-11-26 15:30:36.368710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:38.017 BaseBdev2 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.017 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.018 BaseBdev3_malloc 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.018 [2024-11-26 15:30:36.392955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:38.018 [2024-11-26 15:30:36.393021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.018 [2024-11-26 15:30:36.393042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:38.018 [2024-11-26 15:30:36.393053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.018 [2024-11-26 15:30:36.395456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.018 [2024-11-26 15:30:36.395493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:38.018 
BaseBdev3 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.018 BaseBdev4_malloc 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.018 [2024-11-26 15:30:36.430985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:38.018 [2024-11-26 15:30:36.431143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.018 [2024-11-26 15:30:36.431169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:38.018 [2024-11-26 15:30:36.431194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.018 [2024-11-26 15:30:36.433599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.018 [2024-11-26 15:30:36.433634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:38.018 BaseBdev4 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 512 -b spare_malloc 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.018 spare_malloc 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.018 spare_delay 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.018 [2024-11-26 15:30:36.465581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:38.018 [2024-11-26 15:30:36.465645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.018 [2024-11-26 15:30:36.465670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:38.018 [2024-11-26 15:30:36.465683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.018 [2024-11-26 15:30:36.468032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.018 [2024-11-26 15:30:36.468068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:38.018 spare 00:14:38.018 15:30:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.018 [2024-11-26 15:30:36.473671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.018 [2024-11-26 15:30:36.475798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.018 [2024-11-26 15:30:36.475935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.018 [2024-11-26 15:30:36.475982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:38.018 [2024-11-26 15:30:36.476063] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:38.018 [2024-11-26 15:30:36.476078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:38.018 [2024-11-26 15:30:36.476340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:38.018 [2024-11-26 15:30:36.476861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:38.018 [2024-11-26 15:30:36.476878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:38.018 [2024-11-26 15:30:36.476991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:38.018 
15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.018 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.279 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.279 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.279 "name": "raid_bdev1", 00:14:38.279 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:38.279 "strip_size_kb": 64, 00:14:38.279 "state": "online", 00:14:38.279 "raid_level": "raid5f", 00:14:38.279 "superblock": false, 00:14:38.279 "num_base_bdevs": 4, 00:14:38.279 "num_base_bdevs_discovered": 4, 00:14:38.279 "num_base_bdevs_operational": 4, 00:14:38.279 "base_bdevs_list": [ 00:14:38.279 { 
00:14:38.279 "name": "BaseBdev1", 00:14:38.279 "uuid": "1e737403-60df-530c-ac25-d043f170cb19", 00:14:38.279 "is_configured": true, 00:14:38.279 "data_offset": 0, 00:14:38.279 "data_size": 65536 00:14:38.279 }, 00:14:38.279 { 00:14:38.279 "name": "BaseBdev2", 00:14:38.279 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:38.279 "is_configured": true, 00:14:38.279 "data_offset": 0, 00:14:38.279 "data_size": 65536 00:14:38.279 }, 00:14:38.279 { 00:14:38.279 "name": "BaseBdev3", 00:14:38.279 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:38.279 "is_configured": true, 00:14:38.279 "data_offset": 0, 00:14:38.279 "data_size": 65536 00:14:38.279 }, 00:14:38.279 { 00:14:38.279 "name": "BaseBdev4", 00:14:38.279 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:38.279 "is_configured": true, 00:14:38.279 "data_offset": 0, 00:14:38.279 "data_size": 65536 00:14:38.279 } 00:14:38.279 ] 00:14:38.279 }' 00:14:38.279 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.279 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.539 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:38.539 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.539 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.539 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:38.539 [2024-11-26 15:30:36.952197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.539 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.539 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:38.539 15:30:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.539 15:30:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:38.539 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.539 15:30:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.539 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:38.800 [2024-11-26 15:30:37.192111] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:38.800 /dev/nbd0 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.800 1+0 records in 00:14:38.800 1+0 records out 00:14:38.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424234 s, 9.7 MB/s 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@893 -- # return 0 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:38.800 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:39.370 512+0 records in 00:14:39.370 512+0 records out 00:14:39.370 100663296 bytes (101 MB, 96 MiB) copied, 0.504119 s, 200 MB/s 00:14:39.370 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:39.370 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.370 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:39.370 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.370 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:39.370 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.370 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.631 [2024-11-26 15:30:37.966957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.631 [2024-11-26 15:30:37.981840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.631 15:30:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.631 15:30:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.631 15:30:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.631 15:30:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.631 "name": "raid_bdev1", 00:14:39.631 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:39.631 "strip_size_kb": 64, 00:14:39.631 "state": "online", 00:14:39.631 "raid_level": "raid5f", 00:14:39.631 "superblock": false, 00:14:39.631 "num_base_bdevs": 4, 00:14:39.631 "num_base_bdevs_discovered": 3, 00:14:39.631 "num_base_bdevs_operational": 3, 00:14:39.631 "base_bdevs_list": [ 00:14:39.631 { 00:14:39.631 "name": null, 00:14:39.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.631 "is_configured": false, 00:14:39.631 "data_offset": 0, 00:14:39.631 "data_size": 65536 00:14:39.631 }, 00:14:39.631 { 00:14:39.631 "name": "BaseBdev2", 00:14:39.631 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:39.631 "is_configured": true, 00:14:39.631 "data_offset": 0, 00:14:39.631 "data_size": 65536 00:14:39.631 }, 00:14:39.631 { 00:14:39.631 "name": "BaseBdev3", 00:14:39.631 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:39.631 "is_configured": true, 00:14:39.631 "data_offset": 0, 00:14:39.631 "data_size": 65536 00:14:39.631 }, 00:14:39.631 { 00:14:39.631 "name": "BaseBdev4", 00:14:39.631 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 
00:14:39.631 "is_configured": true, 00:14:39.631 "data_offset": 0, 00:14:39.631 "data_size": 65536 00:14:39.631 } 00:14:39.631 ] 00:14:39.631 }' 00:14:39.631 15:30:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.631 15:30:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.223 15:30:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.223 15:30:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.223 15:30:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.223 [2024-11-26 15:30:38.422012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.223 [2024-11-26 15:30:38.429475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:14:40.223 15:30:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.224 15:30:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:40.224 [2024-11-26 15:30:38.432027] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.164 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.164 "name": "raid_bdev1", 00:14:41.164 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:41.164 "strip_size_kb": 64, 00:14:41.164 "state": "online", 00:14:41.164 "raid_level": "raid5f", 00:14:41.164 "superblock": false, 00:14:41.164 "num_base_bdevs": 4, 00:14:41.164 "num_base_bdevs_discovered": 4, 00:14:41.164 "num_base_bdevs_operational": 4, 00:14:41.164 "process": { 00:14:41.164 "type": "rebuild", 00:14:41.164 "target": "spare", 00:14:41.164 "progress": { 00:14:41.164 "blocks": 19200, 00:14:41.164 "percent": 9 00:14:41.164 } 00:14:41.164 }, 00:14:41.164 "base_bdevs_list": [ 00:14:41.164 { 00:14:41.164 "name": "spare", 00:14:41.164 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:41.164 "is_configured": true, 00:14:41.164 "data_offset": 0, 00:14:41.164 "data_size": 65536 00:14:41.164 }, 00:14:41.164 { 00:14:41.165 "name": "BaseBdev2", 00:14:41.165 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:41.165 "is_configured": true, 00:14:41.165 "data_offset": 0, 00:14:41.165 "data_size": 65536 00:14:41.165 }, 00:14:41.165 { 00:14:41.165 "name": "BaseBdev3", 00:14:41.165 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:41.165 "is_configured": true, 00:14:41.165 "data_offset": 0, 00:14:41.165 "data_size": 65536 00:14:41.165 }, 00:14:41.165 { 00:14:41.165 "name": "BaseBdev4", 00:14:41.165 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:41.165 "is_configured": true, 00:14:41.165 "data_offset": 0, 00:14:41.165 "data_size": 65536 00:14:41.165 } 00:14:41.165 ] 00:14:41.165 }' 00:14:41.165 15:30:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.165 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.165 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.165 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.165 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:41.165 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.165 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.165 [2024-11-26 15:30:39.565837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.424 [2024-11-26 15:30:39.642651] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:41.424 [2024-11-26 15:30:39.642730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.424 [2024-11-26 15:30:39.642749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.424 [2024-11-26 15:30:39.642775] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.424 15:30:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.424 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.424 "name": "raid_bdev1", 00:14:41.424 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:41.424 "strip_size_kb": 64, 00:14:41.424 "state": "online", 00:14:41.424 "raid_level": "raid5f", 00:14:41.424 "superblock": false, 00:14:41.424 "num_base_bdevs": 4, 00:14:41.424 "num_base_bdevs_discovered": 3, 00:14:41.424 "num_base_bdevs_operational": 3, 00:14:41.424 "base_bdevs_list": [ 00:14:41.424 { 00:14:41.424 "name": null, 00:14:41.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.424 "is_configured": false, 00:14:41.424 "data_offset": 0, 00:14:41.424 "data_size": 65536 00:14:41.424 }, 00:14:41.424 { 00:14:41.424 "name": "BaseBdev2", 00:14:41.424 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:41.424 
"is_configured": true, 00:14:41.424 "data_offset": 0, 00:14:41.424 "data_size": 65536 00:14:41.424 }, 00:14:41.424 { 00:14:41.424 "name": "BaseBdev3", 00:14:41.424 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:41.424 "is_configured": true, 00:14:41.424 "data_offset": 0, 00:14:41.424 "data_size": 65536 00:14:41.424 }, 00:14:41.424 { 00:14:41.424 "name": "BaseBdev4", 00:14:41.424 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:41.424 "is_configured": true, 00:14:41.425 "data_offset": 0, 00:14:41.425 "data_size": 65536 00:14:41.425 } 00:14:41.425 ] 00:14:41.425 }' 00:14:41.425 15:30:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.425 15:30:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.683 "name": 
"raid_bdev1", 00:14:41.683 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:41.683 "strip_size_kb": 64, 00:14:41.683 "state": "online", 00:14:41.683 "raid_level": "raid5f", 00:14:41.683 "superblock": false, 00:14:41.683 "num_base_bdevs": 4, 00:14:41.683 "num_base_bdevs_discovered": 3, 00:14:41.683 "num_base_bdevs_operational": 3, 00:14:41.683 "base_bdevs_list": [ 00:14:41.683 { 00:14:41.683 "name": null, 00:14:41.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.683 "is_configured": false, 00:14:41.683 "data_offset": 0, 00:14:41.683 "data_size": 65536 00:14:41.683 }, 00:14:41.683 { 00:14:41.683 "name": "BaseBdev2", 00:14:41.683 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:41.683 "is_configured": true, 00:14:41.683 "data_offset": 0, 00:14:41.683 "data_size": 65536 00:14:41.683 }, 00:14:41.683 { 00:14:41.683 "name": "BaseBdev3", 00:14:41.683 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:41.683 "is_configured": true, 00:14:41.683 "data_offset": 0, 00:14:41.683 "data_size": 65536 00:14:41.683 }, 00:14:41.683 { 00:14:41.683 "name": "BaseBdev4", 00:14:41.683 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:41.683 "is_configured": true, 00:14:41.683 "data_offset": 0, 00:14:41.683 "data_size": 65536 00:14:41.683 } 00:14:41.683 ] 00:14:41.683 }' 00:14:41.683 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.944 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.944 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.944 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.944 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:41.944 15:30:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.944 15:30:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.944 [2024-11-26 15:30:40.224899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.944 [2024-11-26 15:30:40.232219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bc30 00:14:41.944 15:30:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.944 15:30:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:41.944 [2024-11-26 15:30:40.234858] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.883 "name": "raid_bdev1", 00:14:42.883 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:42.883 "strip_size_kb": 64, 00:14:42.883 
"state": "online", 00:14:42.883 "raid_level": "raid5f", 00:14:42.883 "superblock": false, 00:14:42.883 "num_base_bdevs": 4, 00:14:42.883 "num_base_bdevs_discovered": 4, 00:14:42.883 "num_base_bdevs_operational": 4, 00:14:42.883 "process": { 00:14:42.883 "type": "rebuild", 00:14:42.883 "target": "spare", 00:14:42.883 "progress": { 00:14:42.883 "blocks": 19200, 00:14:42.883 "percent": 9 00:14:42.883 } 00:14:42.883 }, 00:14:42.883 "base_bdevs_list": [ 00:14:42.883 { 00:14:42.883 "name": "spare", 00:14:42.883 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:42.883 "is_configured": true, 00:14:42.883 "data_offset": 0, 00:14:42.883 "data_size": 65536 00:14:42.883 }, 00:14:42.883 { 00:14:42.883 "name": "BaseBdev2", 00:14:42.883 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:42.883 "is_configured": true, 00:14:42.883 "data_offset": 0, 00:14:42.883 "data_size": 65536 00:14:42.883 }, 00:14:42.883 { 00:14:42.883 "name": "BaseBdev3", 00:14:42.883 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:42.883 "is_configured": true, 00:14:42.883 "data_offset": 0, 00:14:42.883 "data_size": 65536 00:14:42.883 }, 00:14:42.883 { 00:14:42.883 "name": "BaseBdev4", 00:14:42.883 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:42.883 "is_configured": true, 00:14:42.883 "data_offset": 0, 00:14:42.883 "data_size": 65536 00:14:42.883 } 00:14:42.883 ] 00:14:42.883 }' 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.883 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=500 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.143 "name": "raid_bdev1", 00:14:43.143 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:43.143 "strip_size_kb": 64, 00:14:43.143 "state": "online", 00:14:43.143 "raid_level": "raid5f", 00:14:43.143 "superblock": false, 00:14:43.143 "num_base_bdevs": 4, 00:14:43.143 "num_base_bdevs_discovered": 4, 00:14:43.143 "num_base_bdevs_operational": 4, 00:14:43.143 "process": { 00:14:43.143 "type": "rebuild", 
00:14:43.143 "target": "spare", 00:14:43.143 "progress": { 00:14:43.143 "blocks": 21120, 00:14:43.143 "percent": 10 00:14:43.143 } 00:14:43.143 }, 00:14:43.143 "base_bdevs_list": [ 00:14:43.143 { 00:14:43.143 "name": "spare", 00:14:43.143 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:43.143 "is_configured": true, 00:14:43.143 "data_offset": 0, 00:14:43.143 "data_size": 65536 00:14:43.143 }, 00:14:43.143 { 00:14:43.143 "name": "BaseBdev2", 00:14:43.143 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:43.143 "is_configured": true, 00:14:43.143 "data_offset": 0, 00:14:43.143 "data_size": 65536 00:14:43.143 }, 00:14:43.143 { 00:14:43.143 "name": "BaseBdev3", 00:14:43.143 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:43.143 "is_configured": true, 00:14:43.143 "data_offset": 0, 00:14:43.143 "data_size": 65536 00:14:43.143 }, 00:14:43.143 { 00:14:43.143 "name": "BaseBdev4", 00:14:43.143 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:43.143 "is_configured": true, 00:14:43.143 "data_offset": 0, 00:14:43.143 "data_size": 65536 00:14:43.143 } 00:14:43.143 ] 00:14:43.143 }' 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.143 15:30:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.084 15:30:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.344 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.344 "name": "raid_bdev1", 00:14:44.344 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:44.344 "strip_size_kb": 64, 00:14:44.344 "state": "online", 00:14:44.344 "raid_level": "raid5f", 00:14:44.344 "superblock": false, 00:14:44.344 "num_base_bdevs": 4, 00:14:44.344 "num_base_bdevs_discovered": 4, 00:14:44.344 "num_base_bdevs_operational": 4, 00:14:44.344 "process": { 00:14:44.344 "type": "rebuild", 00:14:44.344 "target": "spare", 00:14:44.344 "progress": { 00:14:44.344 "blocks": 42240, 00:14:44.344 "percent": 21 00:14:44.344 } 00:14:44.344 }, 00:14:44.344 "base_bdevs_list": [ 00:14:44.344 { 00:14:44.344 "name": "spare", 00:14:44.344 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:44.344 "is_configured": true, 00:14:44.344 "data_offset": 0, 00:14:44.344 "data_size": 65536 00:14:44.344 }, 00:14:44.344 { 00:14:44.344 "name": "BaseBdev2", 00:14:44.344 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:44.344 "is_configured": true, 00:14:44.344 "data_offset": 0, 00:14:44.344 
"data_size": 65536 00:14:44.344 }, 00:14:44.344 { 00:14:44.344 "name": "BaseBdev3", 00:14:44.344 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:44.344 "is_configured": true, 00:14:44.344 "data_offset": 0, 00:14:44.344 "data_size": 65536 00:14:44.344 }, 00:14:44.344 { 00:14:44.344 "name": "BaseBdev4", 00:14:44.344 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:44.344 "is_configured": true, 00:14:44.344 "data_offset": 0, 00:14:44.344 "data_size": 65536 00:14:44.344 } 00:14:44.344 ] 00:14:44.344 }' 00:14:44.344 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.344 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.344 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.345 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.345 15:30:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.285 15:30:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.285 "name": "raid_bdev1", 00:14:45.285 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:45.285 "strip_size_kb": 64, 00:14:45.285 "state": "online", 00:14:45.285 "raid_level": "raid5f", 00:14:45.285 "superblock": false, 00:14:45.285 "num_base_bdevs": 4, 00:14:45.285 "num_base_bdevs_discovered": 4, 00:14:45.285 "num_base_bdevs_operational": 4, 00:14:45.285 "process": { 00:14:45.285 "type": "rebuild", 00:14:45.285 "target": "spare", 00:14:45.285 "progress": { 00:14:45.285 "blocks": 65280, 00:14:45.285 "percent": 33 00:14:45.285 } 00:14:45.285 }, 00:14:45.285 "base_bdevs_list": [ 00:14:45.285 { 00:14:45.285 "name": "spare", 00:14:45.285 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:45.285 "is_configured": true, 00:14:45.285 "data_offset": 0, 00:14:45.285 "data_size": 65536 00:14:45.285 }, 00:14:45.285 { 00:14:45.285 "name": "BaseBdev2", 00:14:45.285 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:45.285 "is_configured": true, 00:14:45.285 "data_offset": 0, 00:14:45.285 "data_size": 65536 00:14:45.285 }, 00:14:45.285 { 00:14:45.285 "name": "BaseBdev3", 00:14:45.285 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:45.285 "is_configured": true, 00:14:45.285 "data_offset": 0, 00:14:45.285 "data_size": 65536 00:14:45.285 }, 00:14:45.285 { 00:14:45.285 "name": "BaseBdev4", 00:14:45.285 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:45.285 "is_configured": true, 00:14:45.285 "data_offset": 0, 00:14:45.285 "data_size": 65536 00:14:45.285 } 00:14:45.285 ] 00:14:45.285 }' 00:14:45.285 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:45.544 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.544 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.544 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.544 15:30:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.484 "name": "raid_bdev1", 00:14:46.484 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:46.484 "strip_size_kb": 64, 00:14:46.484 "state": "online", 00:14:46.484 "raid_level": "raid5f", 00:14:46.484 "superblock": false, 
00:14:46.484 "num_base_bdevs": 4, 00:14:46.484 "num_base_bdevs_discovered": 4, 00:14:46.484 "num_base_bdevs_operational": 4, 00:14:46.484 "process": { 00:14:46.484 "type": "rebuild", 00:14:46.484 "target": "spare", 00:14:46.484 "progress": { 00:14:46.484 "blocks": 86400, 00:14:46.484 "percent": 43 00:14:46.484 } 00:14:46.484 }, 00:14:46.484 "base_bdevs_list": [ 00:14:46.484 { 00:14:46.484 "name": "spare", 00:14:46.484 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:46.484 "is_configured": true, 00:14:46.484 "data_offset": 0, 00:14:46.484 "data_size": 65536 00:14:46.484 }, 00:14:46.484 { 00:14:46.484 "name": "BaseBdev2", 00:14:46.484 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:46.484 "is_configured": true, 00:14:46.484 "data_offset": 0, 00:14:46.484 "data_size": 65536 00:14:46.484 }, 00:14:46.484 { 00:14:46.484 "name": "BaseBdev3", 00:14:46.484 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:46.484 "is_configured": true, 00:14:46.484 "data_offset": 0, 00:14:46.484 "data_size": 65536 00:14:46.484 }, 00:14:46.484 { 00:14:46.484 "name": "BaseBdev4", 00:14:46.484 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:46.484 "is_configured": true, 00:14:46.484 "data_offset": 0, 00:14:46.484 "data_size": 65536 00:14:46.484 } 00:14:46.484 ] 00:14:46.484 }' 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.484 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.743 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.743 15:30:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.682 15:30:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.682 15:30:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.682 "name": "raid_bdev1", 00:14:47.682 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:47.682 "strip_size_kb": 64, 00:14:47.682 "state": "online", 00:14:47.682 "raid_level": "raid5f", 00:14:47.682 "superblock": false, 00:14:47.682 "num_base_bdevs": 4, 00:14:47.682 "num_base_bdevs_discovered": 4, 00:14:47.682 "num_base_bdevs_operational": 4, 00:14:47.682 "process": { 00:14:47.682 "type": "rebuild", 00:14:47.682 "target": "spare", 00:14:47.682 "progress": { 00:14:47.682 "blocks": 109440, 00:14:47.682 "percent": 55 00:14:47.682 } 00:14:47.682 }, 00:14:47.682 "base_bdevs_list": [ 00:14:47.682 { 00:14:47.682 "name": "spare", 00:14:47.682 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:47.682 "is_configured": true, 00:14:47.682 "data_offset": 0, 00:14:47.682 "data_size": 65536 00:14:47.682 }, 00:14:47.682 { 00:14:47.682 
"name": "BaseBdev2", 00:14:47.682 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:47.682 "is_configured": true, 00:14:47.682 "data_offset": 0, 00:14:47.682 "data_size": 65536 00:14:47.682 }, 00:14:47.682 { 00:14:47.682 "name": "BaseBdev3", 00:14:47.682 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:47.682 "is_configured": true, 00:14:47.682 "data_offset": 0, 00:14:47.682 "data_size": 65536 00:14:47.682 }, 00:14:47.682 { 00:14:47.682 "name": "BaseBdev4", 00:14:47.682 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:47.682 "is_configured": true, 00:14:47.682 "data_offset": 0, 00:14:47.682 "data_size": 65536 00:14:47.682 } 00:14:47.682 ] 00:14:47.682 }' 00:14:47.682 15:30:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.682 15:30:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.682 15:30:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.682 15:30:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.682 15:30:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.063 "name": "raid_bdev1", 00:14:49.063 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:49.063 "strip_size_kb": 64, 00:14:49.063 "state": "online", 00:14:49.063 "raid_level": "raid5f", 00:14:49.063 "superblock": false, 00:14:49.063 "num_base_bdevs": 4, 00:14:49.063 "num_base_bdevs_discovered": 4, 00:14:49.063 "num_base_bdevs_operational": 4, 00:14:49.063 "process": { 00:14:49.063 "type": "rebuild", 00:14:49.063 "target": "spare", 00:14:49.063 "progress": { 00:14:49.063 "blocks": 130560, 00:14:49.063 "percent": 66 00:14:49.063 } 00:14:49.063 }, 00:14:49.063 "base_bdevs_list": [ 00:14:49.063 { 00:14:49.063 "name": "spare", 00:14:49.063 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:49.063 "is_configured": true, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 65536 00:14:49.063 }, 00:14:49.063 { 00:14:49.063 "name": "BaseBdev2", 00:14:49.063 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:49.063 "is_configured": true, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 65536 00:14:49.063 }, 00:14:49.063 { 00:14:49.063 "name": "BaseBdev3", 00:14:49.063 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:49.063 "is_configured": true, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 65536 00:14:49.063 }, 00:14:49.063 { 00:14:49.063 "name": "BaseBdev4", 00:14:49.063 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:49.063 "is_configured": true, 00:14:49.063 "data_offset": 0, 00:14:49.063 
"data_size": 65536 00:14:49.063 } 00:14:49.063 ] 00:14:49.063 }' 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.063 15:30:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.003 "name": "raid_bdev1", 00:14:50.003 "uuid": 
"80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:50.003 "strip_size_kb": 64, 00:14:50.003 "state": "online", 00:14:50.003 "raid_level": "raid5f", 00:14:50.003 "superblock": false, 00:14:50.003 "num_base_bdevs": 4, 00:14:50.003 "num_base_bdevs_discovered": 4, 00:14:50.003 "num_base_bdevs_operational": 4, 00:14:50.003 "process": { 00:14:50.003 "type": "rebuild", 00:14:50.003 "target": "spare", 00:14:50.003 "progress": { 00:14:50.003 "blocks": 151680, 00:14:50.003 "percent": 77 00:14:50.003 } 00:14:50.003 }, 00:14:50.003 "base_bdevs_list": [ 00:14:50.003 { 00:14:50.003 "name": "spare", 00:14:50.003 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:50.003 "is_configured": true, 00:14:50.003 "data_offset": 0, 00:14:50.003 "data_size": 65536 00:14:50.003 }, 00:14:50.003 { 00:14:50.003 "name": "BaseBdev2", 00:14:50.003 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:50.003 "is_configured": true, 00:14:50.003 "data_offset": 0, 00:14:50.003 "data_size": 65536 00:14:50.003 }, 00:14:50.003 { 00:14:50.003 "name": "BaseBdev3", 00:14:50.003 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:50.003 "is_configured": true, 00:14:50.003 "data_offset": 0, 00:14:50.003 "data_size": 65536 00:14:50.003 }, 00:14:50.003 { 00:14:50.003 "name": "BaseBdev4", 00:14:50.003 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:50.003 "is_configured": true, 00:14:50.003 "data_offset": 0, 00:14:50.003 "data_size": 65536 00:14:50.003 } 00:14:50.003 ] 00:14:50.003 }' 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.003 15:30:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:14:51.384 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.384 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.384 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.384 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.384 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.384 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.384 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.384 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.385 15:30:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.385 15:30:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.385 15:30:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.385 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.385 "name": "raid_bdev1", 00:14:51.385 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:51.385 "strip_size_kb": 64, 00:14:51.385 "state": "online", 00:14:51.385 "raid_level": "raid5f", 00:14:51.385 "superblock": false, 00:14:51.385 "num_base_bdevs": 4, 00:14:51.385 "num_base_bdevs_discovered": 4, 00:14:51.385 "num_base_bdevs_operational": 4, 00:14:51.385 "process": { 00:14:51.385 "type": "rebuild", 00:14:51.385 "target": "spare", 00:14:51.385 "progress": { 00:14:51.385 "blocks": 174720, 00:14:51.385 "percent": 88 00:14:51.385 } 00:14:51.385 }, 00:14:51.385 "base_bdevs_list": [ 00:14:51.385 { 00:14:51.385 "name": "spare", 00:14:51.385 "uuid": 
"d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:51.385 "is_configured": true, 00:14:51.385 "data_offset": 0, 00:14:51.385 "data_size": 65536 00:14:51.385 }, 00:14:51.385 { 00:14:51.385 "name": "BaseBdev2", 00:14:51.385 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:51.385 "is_configured": true, 00:14:51.385 "data_offset": 0, 00:14:51.385 "data_size": 65536 00:14:51.385 }, 00:14:51.385 { 00:14:51.385 "name": "BaseBdev3", 00:14:51.385 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:51.385 "is_configured": true, 00:14:51.385 "data_offset": 0, 00:14:51.385 "data_size": 65536 00:14:51.385 }, 00:14:51.385 { 00:14:51.385 "name": "BaseBdev4", 00:14:51.385 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:51.385 "is_configured": true, 00:14:51.385 "data_offset": 0, 00:14:51.385 "data_size": 65536 00:14:51.385 } 00:14:51.385 ] 00:14:51.385 }' 00:14:51.385 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.385 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.385 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.385 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.385 15:30:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.324 15:30:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.324 [2024-11-26 15:30:50.604544] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:52.324 [2024-11-26 15:30:50.604654] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:52.324 [2024-11-26 15:30:50.604703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.324 "name": "raid_bdev1", 00:14:52.324 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:52.324 "strip_size_kb": 64, 00:14:52.324 "state": "online", 00:14:52.324 "raid_level": "raid5f", 00:14:52.324 "superblock": false, 00:14:52.324 "num_base_bdevs": 4, 00:14:52.324 "num_base_bdevs_discovered": 4, 00:14:52.324 "num_base_bdevs_operational": 4, 00:14:52.324 "process": { 00:14:52.324 "type": "rebuild", 00:14:52.324 "target": "spare", 00:14:52.324 "progress": { 00:14:52.324 "blocks": 195840, 00:14:52.324 "percent": 99 00:14:52.324 } 00:14:52.324 }, 00:14:52.324 "base_bdevs_list": [ 00:14:52.324 { 00:14:52.324 "name": "spare", 00:14:52.324 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:52.324 "is_configured": true, 00:14:52.324 "data_offset": 0, 00:14:52.324 "data_size": 65536 00:14:52.324 }, 00:14:52.324 { 00:14:52.324 "name": "BaseBdev2", 00:14:52.324 "uuid": 
"b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:52.324 "is_configured": true, 00:14:52.324 "data_offset": 0, 00:14:52.324 "data_size": 65536 00:14:52.324 }, 00:14:52.324 { 00:14:52.324 "name": "BaseBdev3", 00:14:52.324 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:52.324 "is_configured": true, 00:14:52.324 "data_offset": 0, 00:14:52.324 "data_size": 65536 00:14:52.324 }, 00:14:52.324 { 00:14:52.324 "name": "BaseBdev4", 00:14:52.324 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:52.324 "is_configured": true, 00:14:52.324 "data_offset": 0, 00:14:52.324 "data_size": 65536 00:14:52.324 } 00:14:52.324 ] 00:14:52.324 }' 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.324 15:30:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.263 15:30:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.263 15:30:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.523 "name": "raid_bdev1", 00:14:53.523 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:53.523 "strip_size_kb": 64, 00:14:53.523 "state": "online", 00:14:53.523 "raid_level": "raid5f", 00:14:53.523 "superblock": false, 00:14:53.523 "num_base_bdevs": 4, 00:14:53.523 "num_base_bdevs_discovered": 4, 00:14:53.523 "num_base_bdevs_operational": 4, 00:14:53.523 "base_bdevs_list": [ 00:14:53.523 { 00:14:53.523 "name": "spare", 00:14:53.523 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 }, 00:14:53.523 { 00:14:53.523 "name": "BaseBdev2", 00:14:53.523 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 }, 00:14:53.523 { 00:14:53.523 "name": "BaseBdev3", 00:14:53.523 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 }, 00:14:53.523 { 00:14:53.523 "name": "BaseBdev4", 00:14:53.523 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 } 00:14:53.523 ] 00:14:53.523 }' 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.523 "name": "raid_bdev1", 00:14:53.523 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:53.523 "strip_size_kb": 64, 00:14:53.523 "state": "online", 00:14:53.523 "raid_level": "raid5f", 00:14:53.523 "superblock": false, 00:14:53.523 "num_base_bdevs": 4, 00:14:53.523 "num_base_bdevs_discovered": 4, 00:14:53.523 "num_base_bdevs_operational": 4, 00:14:53.523 "base_bdevs_list": [ 00:14:53.523 { 00:14:53.523 "name": "spare", 
00:14:53.523 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 }, 00:14:53.523 { 00:14:53.523 "name": "BaseBdev2", 00:14:53.523 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 }, 00:14:53.523 { 00:14:53.523 "name": "BaseBdev3", 00:14:53.523 "uuid": "cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 }, 00:14:53.523 { 00:14:53.523 "name": "BaseBdev4", 00:14:53.523 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:53.523 "is_configured": true, 00:14:53.523 "data_offset": 0, 00:14:53.523 "data_size": 65536 00:14:53.523 } 00:14:53.523 ] 00:14:53.523 }' 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.523 15:30:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.783 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.783 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.783 "name": "raid_bdev1", 00:14:53.783 "uuid": "80af4189-75d4-443a-aea7-ea5dfe2114e0", 00:14:53.783 "strip_size_kb": 64, 00:14:53.783 "state": "online", 00:14:53.783 "raid_level": "raid5f", 00:14:53.783 "superblock": false, 00:14:53.783 "num_base_bdevs": 4, 00:14:53.783 "num_base_bdevs_discovered": 4, 00:14:53.783 "num_base_bdevs_operational": 4, 00:14:53.783 "base_bdevs_list": [ 00:14:53.783 { 00:14:53.783 "name": "spare", 00:14:53.783 "uuid": "d92d0050-45ea-5832-b888-c4d4323843c1", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 }, 00:14:53.783 { 00:14:53.783 "name": "BaseBdev2", 00:14:53.783 "uuid": "b11b39f6-729f-5adf-975a-66bfe9ed9a17", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 }, 00:14:53.783 { 00:14:53.783 "name": "BaseBdev3", 00:14:53.783 "uuid": 
"cf5a2c18-4967-5873-bf8f-6cda47b56716", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 }, 00:14:53.783 { 00:14:53.783 "name": "BaseBdev4", 00:14:53.783 "uuid": "aaa63f99-dee4-5e57-a68a-4a7537bbd8eb", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 } 00:14:53.783 ] 00:14:53.783 }' 00:14:53.783 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.783 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.044 [2024-11-26 15:30:52.446588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.044 [2024-11-26 15:30:52.446623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.044 [2024-11-26 15:30:52.446719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.044 [2024-11-26 15:30:52.446818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.044 [2024-11-26 15:30:52.446827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:54.044 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:54.304 /dev/nbd0 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.304 1+0 records in 00:14:54.304 1+0 records out 00:14:54.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435791 s, 9.4 MB/s 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:54.304 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd1 00:14:54.564 /dev/nbd1 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.564 1+0 records in 00:14:54.564 1+0 records out 00:14:54.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368983 s, 11.1 MB/s 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:54.564 15:30:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.564 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.564 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # 
return 0 00:14:54.564 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.564 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:54.564 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.824 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 96521 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 96521 ']' 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 96521 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96521 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.083 killing process with pid 96521 00:14:55.083 Received shutdown signal, test time was about 60.000000 seconds 
00:14:55.083 00:14:55.083 Latency(us) 00:14:55.083 [2024-11-26T15:30:53.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.083 [2024-11-26T15:30:53.562Z] =================================================================================================================== 00:14:55.083 [2024-11-26T15:30:53.562Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96521' 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 96521 00:14:55.083 [2024-11-26 15:30:53.547095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.083 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 96521 00:14:55.343 [2024-11-26 15:30:53.639565] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.606 ************************************ 00:14:55.606 END TEST raid5f_rebuild_test 00:14:55.606 ************************************ 00:14:55.606 15:30:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:55.606 00:14:55.606 real 0m18.560s 00:14:55.606 user 0m22.305s 00:14:55.606 sys 0m2.381s 00:14:55.606 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.606 15:30:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.606 15:30:54 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:14:55.606 15:30:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:55.606 15:30:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.606 15:30:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.606 
************************************ 00:14:55.606 START TEST raid5f_rebuild_test_sb 00:14:55.606 ************************************ 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=97026 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:55.606 15:30:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 97026 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 97026 ']' 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.606 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.866 [2024-11-26 15:30:54.140349] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:55.866 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:55.866 Zero copy mechanism will not be used. 00:14:55.866 [2024-11-26 15:30:54.140545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97026 ] 00:14:55.866 [2024-11-26 15:30:54.280747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:55.866 [2024-11-26 15:30:54.318427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.126 [2024-11-26 15:30:54.360847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.126 [2024-11-26 15:30:54.437173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.126 [2024-11-26 15:30:54.437302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.696 BaseBdev1_malloc 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.696 [2024-11-26 15:30:54.968968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:56.696 [2024-11-26 15:30:54.969052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.696 [2024-11-26 15:30:54.969087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:56.696 
[2024-11-26 15:30:54.969102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.696 [2024-11-26 15:30:54.971589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.696 [2024-11-26 15:30:54.971626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:56.696 BaseBdev1 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.696 BaseBdev2_malloc 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.696 15:30:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.696 [2024-11-26 15:30:55.003617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:56.696 [2024-11-26 15:30:55.003673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.696 [2024-11-26 15:30:55.003694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:56.696 [2024-11-26 15:30:55.003705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.696 [2024-11-26 15:30:55.006101] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.696 [2024-11-26 15:30:55.006140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:56.696 BaseBdev2 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.696 BaseBdev3_malloc 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.696 [2024-11-26 15:30:55.038217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:56.696 [2024-11-26 15:30:55.038267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.696 [2024-11-26 15:30:55.038289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:56.696 [2024-11-26 15:30:55.038300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.696 [2024-11-26 15:30:55.040640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.696 [2024-11-26 15:30:55.040678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:56.696 BaseBdev3 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.696 BaseBdev4_malloc 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.696 [2024-11-26 15:30:55.090422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:56.696 [2024-11-26 15:30:55.090506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.696 [2024-11-26 15:30:55.090542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:56.696 [2024-11-26 15:30:55.090562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.696 [2024-11-26 15:30:55.094412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.696 [2024-11-26 15:30:55.094469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:56.696 BaseBdev4 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.696 15:30:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:56.696 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.697 spare_malloc 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.697 spare_delay 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.697 [2024-11-26 15:30:55.138577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:56.697 [2024-11-26 15:30:55.138633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.697 [2024-11-26 15:30:55.138658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:56.697 [2024-11-26 15:30:55.138671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.697 [2024-11-26 15:30:55.140998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.697 [2024-11-26 15:30:55.141037] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:14:56.697 spare 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.697 [2024-11-26 15:30:55.150676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.697 [2024-11-26 15:30:55.152755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.697 [2024-11-26 15:30:55.152815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.697 [2024-11-26 15:30:55.152857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:56.697 [2024-11-26 15:30:55.153030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:56.697 [2024-11-26 15:30:55.153046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:56.697 [2024-11-26 15:30:55.153313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:56.697 [2024-11-26 15:30:55.153800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:56.697 [2024-11-26 15:30:55.153811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:56.697 [2024-11-26 15:30:55.153948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.697 15:30:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.697 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.957 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.957 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.957 "name": "raid_bdev1", 00:14:56.957 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:14:56.957 "strip_size_kb": 64, 00:14:56.957 "state": "online", 00:14:56.957 "raid_level": "raid5f", 00:14:56.957 "superblock": true, 
00:14:56.957 "num_base_bdevs": 4, 00:14:56.957 "num_base_bdevs_discovered": 4, 00:14:56.957 "num_base_bdevs_operational": 4, 00:14:56.957 "base_bdevs_list": [ 00:14:56.957 { 00:14:56.957 "name": "BaseBdev1", 00:14:56.957 "uuid": "e2cf7791-eb5b-583d-aed9-ba5bed7ea847", 00:14:56.957 "is_configured": true, 00:14:56.957 "data_offset": 2048, 00:14:56.957 "data_size": 63488 00:14:56.957 }, 00:14:56.957 { 00:14:56.957 "name": "BaseBdev2", 00:14:56.957 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:14:56.957 "is_configured": true, 00:14:56.957 "data_offset": 2048, 00:14:56.957 "data_size": 63488 00:14:56.957 }, 00:14:56.957 { 00:14:56.957 "name": "BaseBdev3", 00:14:56.957 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:14:56.957 "is_configured": true, 00:14:56.957 "data_offset": 2048, 00:14:56.957 "data_size": 63488 00:14:56.957 }, 00:14:56.957 { 00:14:56.957 "name": "BaseBdev4", 00:14:56.957 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:14:56.957 "is_configured": true, 00:14:56.957 "data_offset": 2048, 00:14:56.957 "data_size": 63488 00:14:56.957 } 00:14:56.957 ] 00:14:56.957 }' 00:14:56.957 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.957 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.218 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.218 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.218 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.218 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:57.218 [2024-11-26 15:30:55.645221] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.218 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.218 15:30:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.479 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:57.479 [2024-11-26 15:30:55.913136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:57.479 /dev/nbd0 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.740 1+0 records in 00:14:57.740 1+0 records out 00:14:57.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589746 s, 6.9 MB/s 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 
-- # size=4096 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.740 15:30:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.740 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:57.740 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:57.740 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:57.740 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:58.311 496+0 records in 00:14:58.311 496+0 records out 00:14:58.311 97517568 bytes (98 MB, 93 MiB) copied, 0.59718 s, 163 MB/s 00:14:58.311 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:58.311 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.311 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:58.311 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.311 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:58.311 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.311 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:14:58.572 [2024-11-26 15:30:56.811423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.572 [2024-11-26 15:30:56.839524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.572 15:30:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.572 "name": "raid_bdev1", 00:14:58.572 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:14:58.572 "strip_size_kb": 64, 00:14:58.572 "state": "online", 00:14:58.572 "raid_level": "raid5f", 00:14:58.572 "superblock": true, 00:14:58.572 "num_base_bdevs": 4, 00:14:58.572 "num_base_bdevs_discovered": 3, 00:14:58.572 "num_base_bdevs_operational": 3, 00:14:58.572 "base_bdevs_list": [ 00:14:58.572 { 00:14:58.572 "name": null, 00:14:58.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.572 "is_configured": false, 00:14:58.572 "data_offset": 0, 00:14:58.572 "data_size": 63488 00:14:58.572 }, 00:14:58.572 { 00:14:58.572 "name": "BaseBdev2", 00:14:58.572 "uuid": 
"de65fe43-ddc2-5888-a557-31acee511d0f", 00:14:58.572 "is_configured": true, 00:14:58.572 "data_offset": 2048, 00:14:58.572 "data_size": 63488 00:14:58.572 }, 00:14:58.572 { 00:14:58.572 "name": "BaseBdev3", 00:14:58.572 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:14:58.572 "is_configured": true, 00:14:58.572 "data_offset": 2048, 00:14:58.572 "data_size": 63488 00:14:58.572 }, 00:14:58.572 { 00:14:58.572 "name": "BaseBdev4", 00:14:58.572 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:14:58.572 "is_configured": true, 00:14:58.572 "data_offset": 2048, 00:14:58.572 "data_size": 63488 00:14:58.572 } 00:14:58.572 ] 00:14:58.572 }' 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.572 15:30:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.141 15:30:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.141 15:30:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.141 15:30:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.141 [2024-11-26 15:30:57.319643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.141 [2024-11-26 15:30:57.326822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:14:59.141 15:30:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.141 15:30:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:59.141 [2024-11-26 15:30:57.329302] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.081 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.081 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:00.081 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.082 "name": "raid_bdev1", 00:15:00.082 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:00.082 "strip_size_kb": 64, 00:15:00.082 "state": "online", 00:15:00.082 "raid_level": "raid5f", 00:15:00.082 "superblock": true, 00:15:00.082 "num_base_bdevs": 4, 00:15:00.082 "num_base_bdevs_discovered": 4, 00:15:00.082 "num_base_bdevs_operational": 4, 00:15:00.082 "process": { 00:15:00.082 "type": "rebuild", 00:15:00.082 "target": "spare", 00:15:00.082 "progress": { 00:15:00.082 "blocks": 19200, 00:15:00.082 "percent": 10 00:15:00.082 } 00:15:00.082 }, 00:15:00.082 "base_bdevs_list": [ 00:15:00.082 { 00:15:00.082 "name": "spare", 00:15:00.082 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:00.082 "is_configured": true, 00:15:00.082 "data_offset": 2048, 00:15:00.082 "data_size": 63488 00:15:00.082 }, 00:15:00.082 { 00:15:00.082 "name": "BaseBdev2", 00:15:00.082 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:00.082 "is_configured": true, 00:15:00.082 
"data_offset": 2048, 00:15:00.082 "data_size": 63488 00:15:00.082 }, 00:15:00.082 { 00:15:00.082 "name": "BaseBdev3", 00:15:00.082 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:00.082 "is_configured": true, 00:15:00.082 "data_offset": 2048, 00:15:00.082 "data_size": 63488 00:15:00.082 }, 00:15:00.082 { 00:15:00.082 "name": "BaseBdev4", 00:15:00.082 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:00.082 "is_configured": true, 00:15:00.082 "data_offset": 2048, 00:15:00.082 "data_size": 63488 00:15:00.082 } 00:15:00.082 ] 00:15:00.082 }' 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.082 [2024-11-26 15:30:58.479198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.082 [2024-11-26 15:30:58.537820] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:00.082 [2024-11-26 15:30:58.537889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.082 [2024-11-26 15:30:58.537905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.082 [2024-11-26 15:30:58.537918] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:00.082 
15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.082 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.342 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.342 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.342 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.342 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.342 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.342 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.342 "name": "raid_bdev1", 00:15:00.342 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:00.342 
"strip_size_kb": 64, 00:15:00.342 "state": "online", 00:15:00.342 "raid_level": "raid5f", 00:15:00.342 "superblock": true, 00:15:00.342 "num_base_bdevs": 4, 00:15:00.342 "num_base_bdevs_discovered": 3, 00:15:00.342 "num_base_bdevs_operational": 3, 00:15:00.342 "base_bdevs_list": [ 00:15:00.342 { 00:15:00.342 "name": null, 00:15:00.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.342 "is_configured": false, 00:15:00.342 "data_offset": 0, 00:15:00.342 "data_size": 63488 00:15:00.342 }, 00:15:00.342 { 00:15:00.342 "name": "BaseBdev2", 00:15:00.342 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:00.342 "is_configured": true, 00:15:00.342 "data_offset": 2048, 00:15:00.342 "data_size": 63488 00:15:00.342 }, 00:15:00.342 { 00:15:00.342 "name": "BaseBdev3", 00:15:00.342 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:00.342 "is_configured": true, 00:15:00.342 "data_offset": 2048, 00:15:00.342 "data_size": 63488 00:15:00.342 }, 00:15:00.342 { 00:15:00.342 "name": "BaseBdev4", 00:15:00.342 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:00.342 "is_configured": true, 00:15:00.342 "data_offset": 2048, 00:15:00.342 "data_size": 63488 00:15:00.342 } 00:15:00.342 ] 00:15:00.342 }' 00:15:00.342 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.342 15:30:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.602 
15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.602 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.602 "name": "raid_bdev1", 00:15:00.602 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:00.602 "strip_size_kb": 64, 00:15:00.602 "state": "online", 00:15:00.602 "raid_level": "raid5f", 00:15:00.602 "superblock": true, 00:15:00.602 "num_base_bdevs": 4, 00:15:00.602 "num_base_bdevs_discovered": 3, 00:15:00.602 "num_base_bdevs_operational": 3, 00:15:00.602 "base_bdevs_list": [ 00:15:00.602 { 00:15:00.602 "name": null, 00:15:00.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.602 "is_configured": false, 00:15:00.602 "data_offset": 0, 00:15:00.602 "data_size": 63488 00:15:00.602 }, 00:15:00.602 { 00:15:00.602 "name": "BaseBdev2", 00:15:00.602 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:00.602 "is_configured": true, 00:15:00.602 "data_offset": 2048, 00:15:00.602 "data_size": 63488 00:15:00.602 }, 00:15:00.602 { 00:15:00.602 "name": "BaseBdev3", 00:15:00.602 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:00.602 "is_configured": true, 00:15:00.602 "data_offset": 2048, 00:15:00.602 "data_size": 63488 00:15:00.602 }, 00:15:00.602 { 00:15:00.602 "name": "BaseBdev4", 00:15:00.602 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:00.602 "is_configured": true, 00:15:00.602 "data_offset": 2048, 00:15:00.602 "data_size": 63488 00:15:00.602 } 00:15:00.602 ] 00:15:00.602 }' 00:15:00.602 15:30:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.862 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.862 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.862 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.862 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.862 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.862 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.862 [2024-11-26 15:30:59.175410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.862 [2024-11-26 15:30:59.181512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:15:00.862 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.862 15:30:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:00.862 [2024-11-26 15:30:59.183977] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.801 "name": "raid_bdev1", 00:15:01.801 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:01.801 "strip_size_kb": 64, 00:15:01.801 "state": "online", 00:15:01.801 "raid_level": "raid5f", 00:15:01.801 "superblock": true, 00:15:01.801 "num_base_bdevs": 4, 00:15:01.801 "num_base_bdevs_discovered": 4, 00:15:01.801 "num_base_bdevs_operational": 4, 00:15:01.801 "process": { 00:15:01.801 "type": "rebuild", 00:15:01.801 "target": "spare", 00:15:01.801 "progress": { 00:15:01.801 "blocks": 19200, 00:15:01.801 "percent": 10 00:15:01.801 } 00:15:01.801 }, 00:15:01.801 "base_bdevs_list": [ 00:15:01.801 { 00:15:01.801 "name": "spare", 00:15:01.801 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:01.801 "is_configured": true, 00:15:01.801 "data_offset": 2048, 00:15:01.801 "data_size": 63488 00:15:01.801 }, 00:15:01.801 { 00:15:01.801 "name": "BaseBdev2", 00:15:01.801 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:01.801 "is_configured": true, 00:15:01.801 "data_offset": 2048, 00:15:01.801 "data_size": 63488 00:15:01.801 }, 00:15:01.801 { 00:15:01.801 "name": "BaseBdev3", 00:15:01.801 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:01.801 "is_configured": true, 00:15:01.801 "data_offset": 2048, 00:15:01.801 "data_size": 63488 00:15:01.801 }, 00:15:01.801 { 00:15:01.801 "name": "BaseBdev4", 00:15:01.801 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 
00:15:01.801 "is_configured": true, 00:15:01.801 "data_offset": 2048, 00:15:01.801 "data_size": 63488 00:15:01.801 } 00:15:01.801 ] 00:15:01.801 }' 00:15:01.801 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:02.061 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=519 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.061 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.061 "name": "raid_bdev1", 00:15:02.061 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:02.061 "strip_size_kb": 64, 00:15:02.061 "state": "online", 00:15:02.061 "raid_level": "raid5f", 00:15:02.062 "superblock": true, 00:15:02.062 "num_base_bdevs": 4, 00:15:02.062 "num_base_bdevs_discovered": 4, 00:15:02.062 "num_base_bdevs_operational": 4, 00:15:02.062 "process": { 00:15:02.062 "type": "rebuild", 00:15:02.062 "target": "spare", 00:15:02.062 "progress": { 00:15:02.062 "blocks": 21120, 00:15:02.062 "percent": 11 00:15:02.062 } 00:15:02.062 }, 00:15:02.062 "base_bdevs_list": [ 00:15:02.062 { 00:15:02.062 "name": "spare", 00:15:02.062 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:02.062 "is_configured": true, 00:15:02.062 "data_offset": 2048, 00:15:02.062 "data_size": 63488 00:15:02.062 }, 00:15:02.062 { 00:15:02.062 "name": "BaseBdev2", 00:15:02.062 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:02.062 "is_configured": true, 00:15:02.062 "data_offset": 2048, 00:15:02.062 "data_size": 63488 00:15:02.062 }, 00:15:02.062 { 00:15:02.062 "name": "BaseBdev3", 00:15:02.062 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:02.062 "is_configured": true, 00:15:02.062 "data_offset": 2048, 00:15:02.062 "data_size": 63488 00:15:02.062 }, 00:15:02.062 { 00:15:02.062 "name": "BaseBdev4", 00:15:02.062 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 
00:15:02.062 "is_configured": true, 00:15:02.062 "data_offset": 2048, 00:15:02.062 "data_size": 63488 00:15:02.062 } 00:15:02.062 ] 00:15:02.062 }' 00:15:02.062 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.062 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.062 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.062 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.062 15:31:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.443 15:31:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.443 "name": "raid_bdev1", 00:15:03.443 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:03.443 "strip_size_kb": 64, 00:15:03.443 "state": "online", 00:15:03.443 "raid_level": "raid5f", 00:15:03.443 "superblock": true, 00:15:03.443 "num_base_bdevs": 4, 00:15:03.443 "num_base_bdevs_discovered": 4, 00:15:03.443 "num_base_bdevs_operational": 4, 00:15:03.443 "process": { 00:15:03.443 "type": "rebuild", 00:15:03.443 "target": "spare", 00:15:03.443 "progress": { 00:15:03.443 "blocks": 44160, 00:15:03.443 "percent": 23 00:15:03.443 } 00:15:03.443 }, 00:15:03.443 "base_bdevs_list": [ 00:15:03.443 { 00:15:03.443 "name": "spare", 00:15:03.443 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:03.443 "is_configured": true, 00:15:03.443 "data_offset": 2048, 00:15:03.443 "data_size": 63488 00:15:03.443 }, 00:15:03.443 { 00:15:03.443 "name": "BaseBdev2", 00:15:03.443 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:03.443 "is_configured": true, 00:15:03.443 "data_offset": 2048, 00:15:03.443 "data_size": 63488 00:15:03.443 }, 00:15:03.443 { 00:15:03.443 "name": "BaseBdev3", 00:15:03.443 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:03.443 "is_configured": true, 00:15:03.443 "data_offset": 2048, 00:15:03.443 "data_size": 63488 00:15:03.443 }, 00:15:03.443 { 00:15:03.443 "name": "BaseBdev4", 00:15:03.443 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:03.443 "is_configured": true, 00:15:03.443 "data_offset": 2048, 00:15:03.443 "data_size": 63488 00:15:03.443 } 00:15:03.443 ] 00:15:03.443 }' 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.443 15:31:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.443 15:31:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.381 "name": "raid_bdev1", 00:15:04.381 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:04.381 "strip_size_kb": 64, 00:15:04.381 "state": "online", 00:15:04.381 "raid_level": "raid5f", 00:15:04.381 "superblock": true, 00:15:04.381 "num_base_bdevs": 4, 00:15:04.381 "num_base_bdevs_discovered": 4, 00:15:04.381 "num_base_bdevs_operational": 4, 00:15:04.381 "process": { 00:15:04.381 "type": "rebuild", 00:15:04.381 "target": "spare", 00:15:04.381 "progress": 
{ 00:15:04.381 "blocks": 65280, 00:15:04.381 "percent": 34 00:15:04.381 } 00:15:04.381 }, 00:15:04.381 "base_bdevs_list": [ 00:15:04.381 { 00:15:04.381 "name": "spare", 00:15:04.381 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:04.381 "is_configured": true, 00:15:04.381 "data_offset": 2048, 00:15:04.381 "data_size": 63488 00:15:04.381 }, 00:15:04.381 { 00:15:04.381 "name": "BaseBdev2", 00:15:04.381 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:04.381 "is_configured": true, 00:15:04.381 "data_offset": 2048, 00:15:04.381 "data_size": 63488 00:15:04.381 }, 00:15:04.381 { 00:15:04.381 "name": "BaseBdev3", 00:15:04.381 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:04.381 "is_configured": true, 00:15:04.381 "data_offset": 2048, 00:15:04.381 "data_size": 63488 00:15:04.381 }, 00:15:04.381 { 00:15:04.381 "name": "BaseBdev4", 00:15:04.381 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:04.381 "is_configured": true, 00:15:04.381 "data_offset": 2048, 00:15:04.381 "data_size": 63488 00:15:04.381 } 00:15:04.381 ] 00:15:04.381 }' 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.381 15:31:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.320 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.320 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.320 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.320 
15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.320 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.320 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.579 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.579 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.579 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.579 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.579 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.579 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.579 "name": "raid_bdev1", 00:15:05.579 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:05.579 "strip_size_kb": 64, 00:15:05.579 "state": "online", 00:15:05.579 "raid_level": "raid5f", 00:15:05.579 "superblock": true, 00:15:05.579 "num_base_bdevs": 4, 00:15:05.579 "num_base_bdevs_discovered": 4, 00:15:05.579 "num_base_bdevs_operational": 4, 00:15:05.579 "process": { 00:15:05.579 "type": "rebuild", 00:15:05.579 "target": "spare", 00:15:05.579 "progress": { 00:15:05.579 "blocks": 86400, 00:15:05.579 "percent": 45 00:15:05.579 } 00:15:05.579 }, 00:15:05.579 "base_bdevs_list": [ 00:15:05.579 { 00:15:05.579 "name": "spare", 00:15:05.579 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:05.579 "is_configured": true, 00:15:05.579 "data_offset": 2048, 00:15:05.579 "data_size": 63488 00:15:05.579 }, 00:15:05.579 { 00:15:05.579 "name": "BaseBdev2", 00:15:05.579 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:05.579 "is_configured": true, 00:15:05.579 "data_offset": 2048, 00:15:05.579 "data_size": 
63488 00:15:05.579 }, 00:15:05.579 { 00:15:05.579 "name": "BaseBdev3", 00:15:05.579 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:05.579 "is_configured": true, 00:15:05.579 "data_offset": 2048, 00:15:05.579 "data_size": 63488 00:15:05.579 }, 00:15:05.579 { 00:15:05.579 "name": "BaseBdev4", 00:15:05.579 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:05.579 "is_configured": true, 00:15:05.579 "data_offset": 2048, 00:15:05.579 "data_size": 63488 00:15:05.579 } 00:15:05.579 ] 00:15:05.579 }' 00:15:05.579 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.580 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.580 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.580 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.580 15:31:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.517 15:31:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.517 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.777 15:31:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.777 "name": "raid_bdev1", 00:15:06.777 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:06.777 "strip_size_kb": 64, 00:15:06.777 "state": "online", 00:15:06.777 "raid_level": "raid5f", 00:15:06.777 "superblock": true, 00:15:06.777 "num_base_bdevs": 4, 00:15:06.777 "num_base_bdevs_discovered": 4, 00:15:06.777 "num_base_bdevs_operational": 4, 00:15:06.777 "process": { 00:15:06.777 "type": "rebuild", 00:15:06.777 "target": "spare", 00:15:06.777 "progress": { 00:15:06.777 "blocks": 109440, 00:15:06.777 "percent": 57 00:15:06.777 } 00:15:06.777 }, 00:15:06.777 "base_bdevs_list": [ 00:15:06.777 { 00:15:06.777 "name": "spare", 00:15:06.777 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:06.777 "is_configured": true, 00:15:06.777 "data_offset": 2048, 00:15:06.777 "data_size": 63488 00:15:06.777 }, 00:15:06.777 { 00:15:06.777 "name": "BaseBdev2", 00:15:06.777 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:06.777 "is_configured": true, 00:15:06.777 "data_offset": 2048, 00:15:06.777 "data_size": 63488 00:15:06.777 }, 00:15:06.777 { 00:15:06.777 "name": "BaseBdev3", 00:15:06.777 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:06.777 "is_configured": true, 00:15:06.777 "data_offset": 2048, 00:15:06.777 "data_size": 63488 00:15:06.777 }, 00:15:06.777 { 00:15:06.777 "name": "BaseBdev4", 00:15:06.777 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:06.777 "is_configured": true, 00:15:06.777 "data_offset": 2048, 00:15:06.777 "data_size": 63488 00:15:06.777 } 00:15:06.777 ] 00:15:06.777 }' 00:15:06.777 15:31:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.777 15:31:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.777 15:31:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.777 15:31:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.777 15:31:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.716 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.716 "name": "raid_bdev1", 00:15:07.716 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:07.716 
"strip_size_kb": 64, 00:15:07.716 "state": "online", 00:15:07.716 "raid_level": "raid5f", 00:15:07.716 "superblock": true, 00:15:07.716 "num_base_bdevs": 4, 00:15:07.716 "num_base_bdevs_discovered": 4, 00:15:07.716 "num_base_bdevs_operational": 4, 00:15:07.716 "process": { 00:15:07.716 "type": "rebuild", 00:15:07.716 "target": "spare", 00:15:07.716 "progress": { 00:15:07.716 "blocks": 130560, 00:15:07.716 "percent": 68 00:15:07.716 } 00:15:07.717 }, 00:15:07.717 "base_bdevs_list": [ 00:15:07.717 { 00:15:07.717 "name": "spare", 00:15:07.717 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:07.717 "is_configured": true, 00:15:07.717 "data_offset": 2048, 00:15:07.717 "data_size": 63488 00:15:07.717 }, 00:15:07.717 { 00:15:07.717 "name": "BaseBdev2", 00:15:07.717 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:07.717 "is_configured": true, 00:15:07.717 "data_offset": 2048, 00:15:07.717 "data_size": 63488 00:15:07.717 }, 00:15:07.717 { 00:15:07.717 "name": "BaseBdev3", 00:15:07.717 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:07.717 "is_configured": true, 00:15:07.717 "data_offset": 2048, 00:15:07.717 "data_size": 63488 00:15:07.717 }, 00:15:07.717 { 00:15:07.717 "name": "BaseBdev4", 00:15:07.717 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:07.717 "is_configured": true, 00:15:07.717 "data_offset": 2048, 00:15:07.717 "data_size": 63488 00:15:07.717 } 00:15:07.717 ] 00:15:07.717 }' 00:15:07.717 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.976 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.976 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.976 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.976 15:31:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.915 
15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.915 "name": "raid_bdev1", 00:15:08.915 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:08.915 "strip_size_kb": 64, 00:15:08.915 "state": "online", 00:15:08.915 "raid_level": "raid5f", 00:15:08.915 "superblock": true, 00:15:08.915 "num_base_bdevs": 4, 00:15:08.915 "num_base_bdevs_discovered": 4, 00:15:08.915 "num_base_bdevs_operational": 4, 00:15:08.915 "process": { 00:15:08.915 "type": "rebuild", 00:15:08.915 "target": "spare", 00:15:08.915 "progress": { 00:15:08.915 "blocks": 153600, 00:15:08.915 "percent": 80 00:15:08.915 } 00:15:08.915 }, 00:15:08.915 "base_bdevs_list": [ 00:15:08.915 { 00:15:08.915 "name": "spare", 00:15:08.915 "uuid": 
"a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:08.915 "is_configured": true, 00:15:08.915 "data_offset": 2048, 00:15:08.915 "data_size": 63488 00:15:08.915 }, 00:15:08.915 { 00:15:08.915 "name": "BaseBdev2", 00:15:08.915 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:08.915 "is_configured": true, 00:15:08.915 "data_offset": 2048, 00:15:08.915 "data_size": 63488 00:15:08.915 }, 00:15:08.915 { 00:15:08.915 "name": "BaseBdev3", 00:15:08.915 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:08.915 "is_configured": true, 00:15:08.915 "data_offset": 2048, 00:15:08.915 "data_size": 63488 00:15:08.915 }, 00:15:08.915 { 00:15:08.915 "name": "BaseBdev4", 00:15:08.915 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:08.915 "is_configured": true, 00:15:08.915 "data_offset": 2048, 00:15:08.915 "data_size": 63488 00:15:08.915 } 00:15:08.915 ] 00:15:08.915 }' 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.915 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.175 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.175 15:31:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.113 "name": "raid_bdev1", 00:15:10.113 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:10.113 "strip_size_kb": 64, 00:15:10.113 "state": "online", 00:15:10.113 "raid_level": "raid5f", 00:15:10.113 "superblock": true, 00:15:10.113 "num_base_bdevs": 4, 00:15:10.113 "num_base_bdevs_discovered": 4, 00:15:10.113 "num_base_bdevs_operational": 4, 00:15:10.113 "process": { 00:15:10.113 "type": "rebuild", 00:15:10.113 "target": "spare", 00:15:10.113 "progress": { 00:15:10.113 "blocks": 174720, 00:15:10.113 "percent": 91 00:15:10.113 } 00:15:10.113 }, 00:15:10.113 "base_bdevs_list": [ 00:15:10.113 { 00:15:10.113 "name": "spare", 00:15:10.113 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:10.113 "is_configured": true, 00:15:10.113 "data_offset": 2048, 00:15:10.113 "data_size": 63488 00:15:10.113 }, 00:15:10.113 { 00:15:10.113 "name": "BaseBdev2", 00:15:10.113 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:10.113 "is_configured": true, 00:15:10.113 "data_offset": 2048, 00:15:10.113 "data_size": 63488 00:15:10.113 }, 00:15:10.113 { 00:15:10.113 "name": "BaseBdev3", 00:15:10.113 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:10.113 "is_configured": true, 00:15:10.113 
"data_offset": 2048, 00:15:10.113 "data_size": 63488 00:15:10.113 }, 00:15:10.113 { 00:15:10.113 "name": "BaseBdev4", 00:15:10.113 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:10.113 "is_configured": true, 00:15:10.113 "data_offset": 2048, 00:15:10.113 "data_size": 63488 00:15:10.113 } 00:15:10.113 ] 00:15:10.113 }' 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.113 15:31:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.050 [2024-11-26 15:31:09.248680] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:11.051 [2024-11-26 15:31:09.248761] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:11.051 [2024-11-26 15:31:09.248881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.310 15:31:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.310 "name": "raid_bdev1", 00:15:11.310 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:11.310 "strip_size_kb": 64, 00:15:11.310 "state": "online", 00:15:11.310 "raid_level": "raid5f", 00:15:11.310 "superblock": true, 00:15:11.310 "num_base_bdevs": 4, 00:15:11.310 "num_base_bdevs_discovered": 4, 00:15:11.310 "num_base_bdevs_operational": 4, 00:15:11.310 "base_bdevs_list": [ 00:15:11.310 { 00:15:11.310 "name": "spare", 00:15:11.310 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:11.310 "is_configured": true, 00:15:11.310 "data_offset": 2048, 00:15:11.310 "data_size": 63488 00:15:11.310 }, 00:15:11.310 { 00:15:11.310 "name": "BaseBdev2", 00:15:11.310 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:11.310 "is_configured": true, 00:15:11.310 "data_offset": 2048, 00:15:11.310 "data_size": 63488 00:15:11.310 }, 00:15:11.310 { 00:15:11.310 "name": "BaseBdev3", 00:15:11.310 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:11.310 "is_configured": true, 00:15:11.310 "data_offset": 2048, 00:15:11.310 "data_size": 63488 00:15:11.310 }, 00:15:11.310 { 00:15:11.310 "name": "BaseBdev4", 00:15:11.310 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:11.310 "is_configured": true, 00:15:11.310 "data_offset": 2048, 00:15:11.310 "data_size": 63488 00:15:11.310 } 00:15:11.310 ] 00:15:11.310 }' 00:15:11.310 15:31:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.310 "name": "raid_bdev1", 00:15:11.310 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:11.310 "strip_size_kb": 64, 00:15:11.310 "state": "online", 00:15:11.310 "raid_level": "raid5f", 00:15:11.310 "superblock": true, 
00:15:11.310 "num_base_bdevs": 4, 00:15:11.310 "num_base_bdevs_discovered": 4, 00:15:11.310 "num_base_bdevs_operational": 4, 00:15:11.310 "base_bdevs_list": [ 00:15:11.310 { 00:15:11.310 "name": "spare", 00:15:11.310 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:11.310 "is_configured": true, 00:15:11.310 "data_offset": 2048, 00:15:11.310 "data_size": 63488 00:15:11.310 }, 00:15:11.310 { 00:15:11.310 "name": "BaseBdev2", 00:15:11.310 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:11.310 "is_configured": true, 00:15:11.310 "data_offset": 2048, 00:15:11.310 "data_size": 63488 00:15:11.310 }, 00:15:11.310 { 00:15:11.310 "name": "BaseBdev3", 00:15:11.310 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:11.310 "is_configured": true, 00:15:11.310 "data_offset": 2048, 00:15:11.310 "data_size": 63488 00:15:11.310 }, 00:15:11.310 { 00:15:11.310 "name": "BaseBdev4", 00:15:11.310 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:11.310 "is_configured": true, 00:15:11.310 "data_offset": 2048, 00:15:11.310 "data_size": 63488 00:15:11.310 } 00:15:11.310 ] 00:15:11.310 }' 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.310 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.570 "name": "raid_bdev1", 00:15:11.570 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:11.570 "strip_size_kb": 64, 00:15:11.570 "state": "online", 00:15:11.570 "raid_level": "raid5f", 00:15:11.570 "superblock": true, 00:15:11.570 "num_base_bdevs": 4, 00:15:11.570 "num_base_bdevs_discovered": 4, 00:15:11.570 "num_base_bdevs_operational": 4, 00:15:11.570 "base_bdevs_list": [ 00:15:11.570 { 00:15:11.570 "name": "spare", 00:15:11.570 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:11.570 "is_configured": true, 00:15:11.570 "data_offset": 2048, 00:15:11.570 "data_size": 63488 00:15:11.570 }, 00:15:11.570 { 00:15:11.570 "name": 
"BaseBdev2", 00:15:11.570 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:11.570 "is_configured": true, 00:15:11.570 "data_offset": 2048, 00:15:11.570 "data_size": 63488 00:15:11.570 }, 00:15:11.570 { 00:15:11.570 "name": "BaseBdev3", 00:15:11.570 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:11.570 "is_configured": true, 00:15:11.570 "data_offset": 2048, 00:15:11.570 "data_size": 63488 00:15:11.570 }, 00:15:11.570 { 00:15:11.570 "name": "BaseBdev4", 00:15:11.570 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:11.570 "is_configured": true, 00:15:11.570 "data_offset": 2048, 00:15:11.570 "data_size": 63488 00:15:11.570 } 00:15:11.570 ] 00:15:11.570 }' 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.570 15:31:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.829 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.829 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.829 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.829 [2024-11-26 15:31:10.266602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.829 [2024-11-26 15:31:10.266639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.829 [2024-11-26 15:31:10.266730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.829 [2024-11-26 15:31:10.266824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.829 [2024-11-26 15:31:10.266843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:11.829 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.829 
15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:11.829 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.829 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.829 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.829 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev1 /dev/nbd0 00:15:12.089 /dev/nbd0 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.089 1+0 records in 00:15:12.089 1+0 records out 00:15:12.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423112 s, 9.7 MB/s 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:12.089 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@893 -- # return 0 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:12.349 /dev/nbd1 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.349 1+0 records in 00:15:12.349 1+0 records out 00:15:12.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439124 s, 9.3 MB/s 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.349 
15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.349 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:12.609 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:12.609 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.609 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.609 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.609 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:12.609 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.609 15:31:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.869 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.869 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.869 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.869 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.869 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.869 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.869 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.870 15:31:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.870 [2024-11-26 15:31:11.334443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:12.870 [2024-11-26 15:31:11.334506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.870 [2024-11-26 15:31:11.334532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:12.870 [2024-11-26 15:31:11.334541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.870 [2024-11-26 15:31:11.337087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.870 [2024-11-26 15:31:11.337123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:12.870 [2024-11-26 15:31:11.337225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:12.870 [2024-11-26 15:31:11.337267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.870 [2024-11-26 15:31:11.337413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.870 [2024-11-26 15:31:11.337547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.870 [2024-11-26 15:31:11.337629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:12.870 spare 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.870 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.130 [2024-11-26 15:31:11.437704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:13.130 [2024-11-26 15:31:11.437735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:13.130 [2024-11-26 15:31:11.438035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000495e0 00:15:13.130 [2024-11-26 15:31:11.438574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:13.130 [2024-11-26 15:31:11.438593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:13.130 [2024-11-26 15:31:11.438774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.130 "name": "raid_bdev1", 00:15:13.130 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:13.130 "strip_size_kb": 64, 00:15:13.130 "state": "online", 00:15:13.130 "raid_level": "raid5f", 00:15:13.130 "superblock": true, 00:15:13.130 "num_base_bdevs": 4, 00:15:13.130 "num_base_bdevs_discovered": 4, 00:15:13.130 "num_base_bdevs_operational": 4, 00:15:13.130 "base_bdevs_list": [ 00:15:13.130 { 00:15:13.130 "name": "spare", 00:15:13.130 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:13.130 "is_configured": true, 00:15:13.130 "data_offset": 2048, 00:15:13.130 "data_size": 63488 00:15:13.130 }, 00:15:13.130 { 00:15:13.130 "name": "BaseBdev2", 00:15:13.130 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:13.130 "is_configured": true, 00:15:13.130 "data_offset": 2048, 00:15:13.130 "data_size": 63488 00:15:13.130 }, 00:15:13.130 { 00:15:13.130 "name": "BaseBdev3", 00:15:13.130 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:13.130 "is_configured": true, 00:15:13.130 "data_offset": 2048, 00:15:13.130 "data_size": 63488 00:15:13.130 }, 
00:15:13.130 { 00:15:13.130 "name": "BaseBdev4", 00:15:13.130 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:13.130 "is_configured": true, 00:15:13.130 "data_offset": 2048, 00:15:13.130 "data_size": 63488 00:15:13.130 } 00:15:13.130 ] 00:15:13.130 }' 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.130 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.702 "name": "raid_bdev1", 00:15:13.702 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:13.702 "strip_size_kb": 64, 00:15:13.702 "state": "online", 00:15:13.702 "raid_level": "raid5f", 00:15:13.702 "superblock": true, 00:15:13.702 "num_base_bdevs": 4, 00:15:13.702 "num_base_bdevs_discovered": 4, 
00:15:13.702 "num_base_bdevs_operational": 4, 00:15:13.702 "base_bdevs_list": [ 00:15:13.702 { 00:15:13.702 "name": "spare", 00:15:13.702 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:13.702 "is_configured": true, 00:15:13.702 "data_offset": 2048, 00:15:13.702 "data_size": 63488 00:15:13.702 }, 00:15:13.702 { 00:15:13.702 "name": "BaseBdev2", 00:15:13.702 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:13.702 "is_configured": true, 00:15:13.702 "data_offset": 2048, 00:15:13.702 "data_size": 63488 00:15:13.702 }, 00:15:13.702 { 00:15:13.702 "name": "BaseBdev3", 00:15:13.702 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:13.702 "is_configured": true, 00:15:13.702 "data_offset": 2048, 00:15:13.702 "data_size": 63488 00:15:13.702 }, 00:15:13.702 { 00:15:13.702 "name": "BaseBdev4", 00:15:13.702 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:13.702 "is_configured": true, 00:15:13.702 "data_offset": 2048, 00:15:13.702 "data_size": 63488 00:15:13.702 } 00:15:13.702 ] 00:15:13.702 }' 00:15:13.702 15:31:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.702 [2024-11-26 15:31:12.102907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.702 15:31:12 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.703 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.703 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.703 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.703 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.703 "name": "raid_bdev1", 00:15:13.703 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:13.703 "strip_size_kb": 64, 00:15:13.703 "state": "online", 00:15:13.703 "raid_level": "raid5f", 00:15:13.703 "superblock": true, 00:15:13.703 "num_base_bdevs": 4, 00:15:13.703 "num_base_bdevs_discovered": 3, 00:15:13.703 "num_base_bdevs_operational": 3, 00:15:13.703 "base_bdevs_list": [ 00:15:13.703 { 00:15:13.703 "name": null, 00:15:13.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.703 "is_configured": false, 00:15:13.703 "data_offset": 0, 00:15:13.703 "data_size": 63488 00:15:13.703 }, 00:15:13.703 { 00:15:13.703 "name": "BaseBdev2", 00:15:13.703 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:13.703 "is_configured": true, 00:15:13.703 "data_offset": 2048, 00:15:13.703 "data_size": 63488 00:15:13.703 }, 00:15:13.703 { 00:15:13.703 "name": "BaseBdev3", 00:15:13.703 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:13.703 "is_configured": true, 00:15:13.703 "data_offset": 2048, 00:15:13.703 "data_size": 63488 00:15:13.703 }, 00:15:13.703 { 00:15:13.703 "name": "BaseBdev4", 00:15:13.703 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:13.703 "is_configured": true, 00:15:13.703 "data_offset": 2048, 00:15:13.703 "data_size": 63488 00:15:13.703 } 00:15:13.703 ] 00:15:13.703 }' 00:15:13.703 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.703 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.273 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.273 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.273 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.273 [2024-11-26 15:31:12.499006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.273 [2024-11-26 15:31:12.499210] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:14.273 [2024-11-26 15:31:12.499238] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:14.273 [2024-11-26 15:31:12.499274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.273 [2024-11-26 15:31:12.506305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000496b0 00:15:14.273 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.273 15:31:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:14.273 [2024-11-26 15:31:12.508846] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.212 
15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.212 "name": "raid_bdev1", 00:15:15.212 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:15.212 "strip_size_kb": 64, 00:15:15.212 "state": "online", 00:15:15.212 "raid_level": "raid5f", 00:15:15.212 "superblock": true, 00:15:15.212 "num_base_bdevs": 4, 00:15:15.212 "num_base_bdevs_discovered": 4, 00:15:15.212 "num_base_bdevs_operational": 4, 00:15:15.212 "process": { 00:15:15.212 "type": "rebuild", 00:15:15.212 "target": "spare", 00:15:15.212 "progress": { 00:15:15.212 "blocks": 19200, 00:15:15.212 "percent": 10 00:15:15.212 } 00:15:15.212 }, 00:15:15.212 "base_bdevs_list": [ 00:15:15.212 { 00:15:15.212 "name": "spare", 00:15:15.212 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:15.212 "is_configured": true, 00:15:15.212 "data_offset": 2048, 00:15:15.212 "data_size": 63488 00:15:15.212 }, 00:15:15.212 { 00:15:15.212 "name": "BaseBdev2", 00:15:15.212 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:15.212 "is_configured": true, 00:15:15.212 "data_offset": 2048, 00:15:15.212 "data_size": 63488 00:15:15.212 }, 00:15:15.212 { 00:15:15.212 "name": "BaseBdev3", 00:15:15.212 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:15.212 "is_configured": true, 00:15:15.212 "data_offset": 2048, 00:15:15.212 "data_size": 63488 00:15:15.212 }, 00:15:15.212 { 00:15:15.212 "name": "BaseBdev4", 00:15:15.212 "uuid": 
"2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:15.212 "is_configured": true, 00:15:15.212 "data_offset": 2048, 00:15:15.212 "data_size": 63488 00:15:15.212 } 00:15:15.212 ] 00:15:15.212 }' 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.212 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:15.213 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.213 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.213 [2024-11-26 15:31:13.646661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.472 [2024-11-26 15:31:13.717065] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:15.472 [2024-11-26 15:31:13.717122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.472 [2024-11-26 15:31:13.717137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.472 [2024-11-26 15:31:13.717147] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.472 15:31:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.472 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.472 "name": "raid_bdev1", 00:15:15.472 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:15.472 "strip_size_kb": 64, 00:15:15.472 "state": "online", 00:15:15.472 "raid_level": "raid5f", 00:15:15.472 "superblock": true, 00:15:15.472 "num_base_bdevs": 4, 00:15:15.472 "num_base_bdevs_discovered": 3, 00:15:15.472 "num_base_bdevs_operational": 3, 00:15:15.472 "base_bdevs_list": [ 00:15:15.472 { 00:15:15.472 "name": null, 00:15:15.472 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:15.472 "is_configured": false, 00:15:15.472 "data_offset": 0, 00:15:15.472 "data_size": 63488 00:15:15.472 }, 00:15:15.472 { 00:15:15.472 "name": "BaseBdev2", 00:15:15.472 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:15.473 "is_configured": true, 00:15:15.473 "data_offset": 2048, 00:15:15.473 "data_size": 63488 00:15:15.473 }, 00:15:15.473 { 00:15:15.473 "name": "BaseBdev3", 00:15:15.473 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:15.473 "is_configured": true, 00:15:15.473 "data_offset": 2048, 00:15:15.473 "data_size": 63488 00:15:15.473 }, 00:15:15.473 { 00:15:15.473 "name": "BaseBdev4", 00:15:15.473 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:15.473 "is_configured": true, 00:15:15.473 "data_offset": 2048, 00:15:15.473 "data_size": 63488 00:15:15.473 } 00:15:15.473 ] 00:15:15.473 }' 00:15:15.473 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.473 15:31:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.732 15:31:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.732 15:31:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.732 15:31:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.732 [2024-11-26 15:31:14.157593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:15.732 [2024-11-26 15:31:14.157650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.732 [2024-11-26 15:31:14.157676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:15.732 [2024-11-26 15:31:14.157689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.732 [2024-11-26 15:31:14.158187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:15:15.732 [2024-11-26 15:31:14.158225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:15.732 [2024-11-26 15:31:14.158310] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:15.732 [2024-11-26 15:31:14.158333] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:15.732 [2024-11-26 15:31:14.158344] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:15.732 [2024-11-26 15:31:14.158382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.732 [2024-11-26 15:31:14.163519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049780 00:15:15.732 spare 00:15:15.732 15:31:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.732 15:31:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:15.732 [2024-11-26 15:31:14.165951] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.114 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.114 "name": "raid_bdev1", 00:15:17.114 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:17.114 "strip_size_kb": 64, 00:15:17.114 "state": "online", 00:15:17.114 "raid_level": "raid5f", 00:15:17.114 "superblock": true, 00:15:17.114 "num_base_bdevs": 4, 00:15:17.114 "num_base_bdevs_discovered": 4, 00:15:17.114 "num_base_bdevs_operational": 4, 00:15:17.114 "process": { 00:15:17.114 "type": "rebuild", 00:15:17.114 "target": "spare", 00:15:17.114 "progress": { 00:15:17.114 "blocks": 19200, 00:15:17.114 "percent": 10 00:15:17.114 } 00:15:17.114 }, 00:15:17.114 "base_bdevs_list": [ 00:15:17.114 { 00:15:17.115 "name": "spare", 00:15:17.115 "uuid": "a51c5572-3559-54a6-9e83-b2eb96da26e8", 00:15:17.115 "is_configured": true, 00:15:17.115 "data_offset": 2048, 00:15:17.115 "data_size": 63488 00:15:17.115 }, 00:15:17.115 { 00:15:17.115 "name": "BaseBdev2", 00:15:17.115 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:17.115 "is_configured": true, 00:15:17.115 "data_offset": 2048, 00:15:17.115 "data_size": 63488 00:15:17.115 }, 00:15:17.115 { 00:15:17.115 "name": "BaseBdev3", 00:15:17.115 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:17.115 "is_configured": true, 00:15:17.115 "data_offset": 2048, 00:15:17.115 "data_size": 63488 00:15:17.115 }, 00:15:17.115 { 00:15:17.115 "name": "BaseBdev4", 00:15:17.115 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:17.115 "is_configured": true, 00:15:17.115 "data_offset": 2048, 00:15:17.115 "data_size": 63488 00:15:17.115 } 00:15:17.115 ] 00:15:17.115 }' 00:15:17.115 15:31:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.115 [2024-11-26 15:31:15.327815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.115 [2024-11-26 15:31:15.374276] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.115 [2024-11-26 15:31:15.374327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.115 [2024-11-26 15:31:15.374346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.115 [2024-11-26 15:31:15.374353] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.115 
15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.115 "name": "raid_bdev1", 00:15:17.115 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:17.115 "strip_size_kb": 64, 00:15:17.115 "state": "online", 00:15:17.115 "raid_level": "raid5f", 00:15:17.115 "superblock": true, 00:15:17.115 "num_base_bdevs": 4, 00:15:17.115 "num_base_bdevs_discovered": 3, 00:15:17.115 "num_base_bdevs_operational": 3, 00:15:17.115 "base_bdevs_list": [ 00:15:17.115 { 00:15:17.115 "name": null, 00:15:17.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.115 "is_configured": false, 00:15:17.115 "data_offset": 0, 00:15:17.115 "data_size": 63488 00:15:17.115 }, 00:15:17.115 { 00:15:17.115 "name": "BaseBdev2", 00:15:17.115 "uuid": 
"de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:17.115 "is_configured": true, 00:15:17.115 "data_offset": 2048, 00:15:17.115 "data_size": 63488 00:15:17.115 }, 00:15:17.115 { 00:15:17.115 "name": "BaseBdev3", 00:15:17.115 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:17.115 "is_configured": true, 00:15:17.115 "data_offset": 2048, 00:15:17.115 "data_size": 63488 00:15:17.115 }, 00:15:17.115 { 00:15:17.115 "name": "BaseBdev4", 00:15:17.115 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:17.115 "is_configured": true, 00:15:17.115 "data_offset": 2048, 00:15:17.115 "data_size": 63488 00:15:17.115 } 00:15:17.115 ] 00:15:17.115 }' 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.115 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.376 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.376 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.376 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.376 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.376 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.376 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.376 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.376 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.376 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.636 15:31:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.636 "name": "raid_bdev1", 00:15:17.636 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:17.636 "strip_size_kb": 64, 00:15:17.636 "state": "online", 00:15:17.636 "raid_level": "raid5f", 00:15:17.636 "superblock": true, 00:15:17.636 "num_base_bdevs": 4, 00:15:17.636 "num_base_bdevs_discovered": 3, 00:15:17.636 "num_base_bdevs_operational": 3, 00:15:17.636 "base_bdevs_list": [ 00:15:17.636 { 00:15:17.636 "name": null, 00:15:17.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.636 "is_configured": false, 00:15:17.636 "data_offset": 0, 00:15:17.636 "data_size": 63488 00:15:17.636 }, 00:15:17.636 { 00:15:17.636 "name": "BaseBdev2", 00:15:17.636 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:17.636 "is_configured": true, 00:15:17.636 "data_offset": 2048, 00:15:17.636 "data_size": 63488 00:15:17.636 }, 00:15:17.636 { 00:15:17.636 "name": "BaseBdev3", 00:15:17.636 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:17.636 "is_configured": true, 00:15:17.636 "data_offset": 2048, 00:15:17.636 "data_size": 63488 00:15:17.636 }, 00:15:17.636 { 00:15:17.636 "name": "BaseBdev4", 00:15:17.636 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:17.636 "is_configured": true, 00:15:17.636 "data_offset": 2048, 00:15:17.636 "data_size": 63488 00:15:17.636 } 00:15:17.636 ] 00:15:17.636 }' 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:17.636 
15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.636 [2024-11-26 15:31:15.991315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:17.636 [2024-11-26 15:31:15.991367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.636 [2024-11-26 15:31:15.991390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:17.636 [2024-11-26 15:31:15.991399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.636 [2024-11-26 15:31:15.991881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.636 [2024-11-26 15:31:15.991906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:17.636 [2024-11-26 15:31:15.991984] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:17.636 [2024-11-26 15:31:15.992002] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:17.636 [2024-11-26 15:31:15.992015] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:17.636 [2024-11-26 15:31:15.992026] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 
00:15:17.636 BaseBdev1 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.636 15:31:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.576 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.836 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:18.836 "name": "raid_bdev1", 00:15:18.836 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:18.836 "strip_size_kb": 64, 00:15:18.836 "state": "online", 00:15:18.836 "raid_level": "raid5f", 00:15:18.836 "superblock": true, 00:15:18.836 "num_base_bdevs": 4, 00:15:18.836 "num_base_bdevs_discovered": 3, 00:15:18.836 "num_base_bdevs_operational": 3, 00:15:18.836 "base_bdevs_list": [ 00:15:18.836 { 00:15:18.836 "name": null, 00:15:18.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.836 "is_configured": false, 00:15:18.836 "data_offset": 0, 00:15:18.836 "data_size": 63488 00:15:18.837 }, 00:15:18.837 { 00:15:18.837 "name": "BaseBdev2", 00:15:18.837 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:18.837 "is_configured": true, 00:15:18.837 "data_offset": 2048, 00:15:18.837 "data_size": 63488 00:15:18.837 }, 00:15:18.837 { 00:15:18.837 "name": "BaseBdev3", 00:15:18.837 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:18.837 "is_configured": true, 00:15:18.837 "data_offset": 2048, 00:15:18.837 "data_size": 63488 00:15:18.837 }, 00:15:18.837 { 00:15:18.837 "name": "BaseBdev4", 00:15:18.837 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:18.837 "is_configured": true, 00:15:18.837 "data_offset": 2048, 00:15:18.837 "data_size": 63488 00:15:18.837 } 00:15:18.837 ] 00:15:18.837 }' 00:15:18.837 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.837 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.097 "name": "raid_bdev1", 00:15:19.097 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:19.097 "strip_size_kb": 64, 00:15:19.097 "state": "online", 00:15:19.097 "raid_level": "raid5f", 00:15:19.097 "superblock": true, 00:15:19.097 "num_base_bdevs": 4, 00:15:19.097 "num_base_bdevs_discovered": 3, 00:15:19.097 "num_base_bdevs_operational": 3, 00:15:19.097 "base_bdevs_list": [ 00:15:19.097 { 00:15:19.097 "name": null, 00:15:19.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.097 "is_configured": false, 00:15:19.097 "data_offset": 0, 00:15:19.097 "data_size": 63488 00:15:19.097 }, 00:15:19.097 { 00:15:19.097 "name": "BaseBdev2", 00:15:19.097 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:19.097 "is_configured": true, 00:15:19.097 "data_offset": 2048, 00:15:19.097 "data_size": 63488 00:15:19.097 }, 00:15:19.097 { 00:15:19.097 "name": "BaseBdev3", 00:15:19.097 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:19.097 "is_configured": true, 00:15:19.097 "data_offset": 2048, 00:15:19.097 "data_size": 63488 00:15:19.097 }, 00:15:19.097 { 00:15:19.097 "name": "BaseBdev4", 00:15:19.097 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:19.097 "is_configured": true, 
00:15:19.097 "data_offset": 2048, 00:15:19.097 "data_size": 63488 00:15:19.097 } 00:15:19.097 ] 00:15:19.097 }' 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.097 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.357 [2024-11-26 15:31:17.583708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.357 [2024-11-26 15:31:17.583832] bdev_raid.c:3700:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:19.357 [2024-11-26 15:31:17.583850] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:19.357 request: 00:15:19.357 { 00:15:19.357 "base_bdev": "BaseBdev1", 00:15:19.357 "raid_bdev": "raid_bdev1", 00:15:19.357 "method": "bdev_raid_add_base_bdev", 00:15:19.357 "req_id": 1 00:15:19.357 } 00:15:19.357 Got JSON-RPC error response 00:15:19.357 response: 00:15:19.357 { 00:15:19.357 "code": -22, 00:15:19.357 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:19.357 } 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.357 15:31:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.297 15:31:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.297 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.297 "name": "raid_bdev1", 00:15:20.297 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:20.297 "strip_size_kb": 64, 00:15:20.297 "state": "online", 00:15:20.297 "raid_level": "raid5f", 00:15:20.297 "superblock": true, 00:15:20.298 "num_base_bdevs": 4, 00:15:20.298 "num_base_bdevs_discovered": 3, 00:15:20.298 "num_base_bdevs_operational": 3, 00:15:20.298 "base_bdevs_list": [ 00:15:20.298 { 00:15:20.298 "name": null, 00:15:20.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.298 "is_configured": false, 00:15:20.298 "data_offset": 0, 00:15:20.298 "data_size": 63488 00:15:20.298 }, 00:15:20.298 { 00:15:20.298 "name": "BaseBdev2", 00:15:20.298 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:20.298 "is_configured": true, 00:15:20.298 "data_offset": 2048, 00:15:20.298 "data_size": 63488 00:15:20.298 }, 00:15:20.298 { 00:15:20.298 "name": "BaseBdev3", 00:15:20.298 "uuid": 
"4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:20.298 "is_configured": true, 00:15:20.298 "data_offset": 2048, 00:15:20.298 "data_size": 63488 00:15:20.298 }, 00:15:20.298 { 00:15:20.298 "name": "BaseBdev4", 00:15:20.298 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:20.298 "is_configured": true, 00:15:20.298 "data_offset": 2048, 00:15:20.298 "data_size": 63488 00:15:20.298 } 00:15:20.298 ] 00:15:20.298 }' 00:15:20.298 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.298 15:31:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.875 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.875 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.875 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.875 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.875 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.876 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.876 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.876 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.876 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.876 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.876 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.876 "name": "raid_bdev1", 00:15:20.876 "uuid": "d0970634-aef4-4323-9ab6-2a3f4303b312", 00:15:20.876 "strip_size_kb": 64, 00:15:20.876 "state": 
"online", 00:15:20.876 "raid_level": "raid5f", 00:15:20.876 "superblock": true, 00:15:20.876 "num_base_bdevs": 4, 00:15:20.876 "num_base_bdevs_discovered": 3, 00:15:20.876 "num_base_bdevs_operational": 3, 00:15:20.876 "base_bdevs_list": [ 00:15:20.876 { 00:15:20.876 "name": null, 00:15:20.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.876 "is_configured": false, 00:15:20.876 "data_offset": 0, 00:15:20.876 "data_size": 63488 00:15:20.876 }, 00:15:20.876 { 00:15:20.876 "name": "BaseBdev2", 00:15:20.876 "uuid": "de65fe43-ddc2-5888-a557-31acee511d0f", 00:15:20.876 "is_configured": true, 00:15:20.876 "data_offset": 2048, 00:15:20.876 "data_size": 63488 00:15:20.876 }, 00:15:20.876 { 00:15:20.876 "name": "BaseBdev3", 00:15:20.876 "uuid": "4090ea16-4e0f-57a4-828c-0b4c584811d5", 00:15:20.876 "is_configured": true, 00:15:20.876 "data_offset": 2048, 00:15:20.876 "data_size": 63488 00:15:20.876 }, 00:15:20.876 { 00:15:20.876 "name": "BaseBdev4", 00:15:20.876 "uuid": "2536ca16-bb98-5cbc-a5c2-f5a0e32b903e", 00:15:20.876 "is_configured": true, 00:15:20.876 "data_offset": 2048, 00:15:20.877 "data_size": 63488 00:15:20.877 } 00:15:20.877 ] 00:15:20.877 }' 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 97026 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 97026 ']' 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 97026 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@959 -- # uname 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.877 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97026 00:15:20.877 killing process with pid 97026 00:15:20.877 Received shutdown signal, test time was about 60.000000 seconds 00:15:20.877 00:15:20.877 Latency(us) 00:15:20.877 [2024-11-26T15:31:19.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.877 [2024-11-26T15:31:19.356Z] =================================================================================================================== 00:15:20.877 [2024-11-26T15:31:19.356Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:20.878 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.878 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.878 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97026' 00:15:20.878 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 97026 00:15:20.878 [2024-11-26 15:31:19.218601] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.878 [2024-11-26 15:31:19.218716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.878 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 97026 00:15:20.878 [2024-11-26 15:31:19.218783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.878 [2024-11-26 15:31:19.218797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:20.878 [2024-11-26 15:31:19.309849] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:15:21.458 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:21.458 00:15:21.458 real 0m25.594s 00:15:21.458 user 0m32.262s 00:15:21.458 sys 0m3.402s 00:15:21.458 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.458 15:31:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.458 ************************************ 00:15:21.458 END TEST raid5f_rebuild_test_sb 00:15:21.458 ************************************ 00:15:21.458 15:31:19 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:21.458 15:31:19 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:21.458 15:31:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:21.458 15:31:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.458 15:31:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.458 ************************************ 00:15:21.458 START TEST raid_state_function_test_sb_4k 00:15:21.458 ************************************ 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs 
)) 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=97827 00:15:21.458 15:31:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:21.458 Process raid pid: 97827 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97827' 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 97827 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 97827 ']' 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.458 15:31:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.458 [2024-11-26 15:31:19.812914] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:21.458 [2024-11-26 15:31:19.813055] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.719 [2024-11-26 15:31:19.950632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:21.719 [2024-11-26 15:31:19.990152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.719 [2024-11-26 15:31:20.029948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.719 [2024-11-26 15:31:20.106392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.719 [2024-11-26 15:31:20.106428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.289 [2024-11-26 15:31:20.650998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.289 [2024-11-26 15:31:20.651046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.289 [2024-11-26 15:31:20.651059] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.289 [2024-11-26 15:31:20.651067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.289 
15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.289 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.290 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.290 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.290 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.290 "name": "Existed_Raid", 00:15:22.290 "uuid": "1ee88f86-af03-4831-9a7f-ed9af4b5ed68", 00:15:22.290 "strip_size_kb": 0, 00:15:22.290 "state": "configuring", 00:15:22.290 "raid_level": "raid1", 00:15:22.290 "superblock": true, 00:15:22.290 "num_base_bdevs": 2, 00:15:22.290 "num_base_bdevs_discovered": 0, 00:15:22.290 "num_base_bdevs_operational": 2, 
00:15:22.290 "base_bdevs_list": [ 00:15:22.290 { 00:15:22.290 "name": "BaseBdev1", 00:15:22.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.290 "is_configured": false, 00:15:22.290 "data_offset": 0, 00:15:22.290 "data_size": 0 00:15:22.290 }, 00:15:22.290 { 00:15:22.290 "name": "BaseBdev2", 00:15:22.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.290 "is_configured": false, 00:15:22.290 "data_offset": 0, 00:15:22.290 "data_size": 0 00:15:22.290 } 00:15:22.290 ] 00:15:22.290 }' 00:15:22.290 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.290 15:31:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.872 [2024-11-26 15:31:21.134997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.872 [2024-11-26 15:31:21.135041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.872 [2024-11-26 15:31:21.147035] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:15:22.872 [2024-11-26 15:31:21.147067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.872 [2024-11-26 15:31:21.147078] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.872 [2024-11-26 15:31:21.147084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.872 [2024-11-26 15:31:21.174095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.872 BaseBdev1 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.872 15:31:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.872 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.872 [ 00:15:22.872 { 00:15:22.872 "name": "BaseBdev1", 00:15:22.872 "aliases": [ 00:15:22.872 "07452f6a-54e9-419e-a67b-4a74fd2c0e76" 00:15:22.872 ], 00:15:22.872 "product_name": "Malloc disk", 00:15:22.872 "block_size": 4096, 00:15:22.872 "num_blocks": 8192, 00:15:22.872 "uuid": "07452f6a-54e9-419e-a67b-4a74fd2c0e76", 00:15:22.872 "assigned_rate_limits": { 00:15:22.872 "rw_ios_per_sec": 0, 00:15:22.872 "rw_mbytes_per_sec": 0, 00:15:22.872 "r_mbytes_per_sec": 0, 00:15:22.872 "w_mbytes_per_sec": 0 00:15:22.872 }, 00:15:22.872 "claimed": true, 00:15:22.872 "claim_type": "exclusive_write", 00:15:22.872 "zoned": false, 00:15:22.872 "supported_io_types": { 00:15:22.872 "read": true, 00:15:22.872 "write": true, 00:15:22.872 "unmap": true, 00:15:22.872 "flush": true, 00:15:22.872 "reset": true, 00:15:22.872 "nvme_admin": false, 00:15:22.872 "nvme_io": false, 00:15:22.872 "nvme_io_md": false, 00:15:22.872 "write_zeroes": true, 00:15:22.872 "zcopy": true, 00:15:22.872 "get_zone_info": false, 00:15:22.872 "zone_management": false, 00:15:22.872 "zone_append": false, 00:15:22.872 "compare": false, 00:15:22.872 "compare_and_write": false, 00:15:22.872 "abort": true, 00:15:22.872 "seek_hole": false, 00:15:22.873 "seek_data": false, 00:15:22.873 "copy": true, 00:15:22.873 "nvme_iov_md": false 
00:15:22.873 }, 00:15:22.873 "memory_domains": [ 00:15:22.873 { 00:15:22.873 "dma_device_id": "system", 00:15:22.873 "dma_device_type": 1 00:15:22.873 }, 00:15:22.873 { 00:15:22.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.873 "dma_device_type": 2 00:15:22.873 } 00:15:22.873 ], 00:15:22.873 "driver_specific": {} 00:15:22.873 } 00:15:22.873 ] 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.873 "name": "Existed_Raid", 00:15:22.873 "uuid": "479ba6b1-90f4-438e-8f92-00823a27b46f", 00:15:22.873 "strip_size_kb": 0, 00:15:22.873 "state": "configuring", 00:15:22.873 "raid_level": "raid1", 00:15:22.873 "superblock": true, 00:15:22.873 "num_base_bdevs": 2, 00:15:22.873 "num_base_bdevs_discovered": 1, 00:15:22.873 "num_base_bdevs_operational": 2, 00:15:22.873 "base_bdevs_list": [ 00:15:22.873 { 00:15:22.873 "name": "BaseBdev1", 00:15:22.873 "uuid": "07452f6a-54e9-419e-a67b-4a74fd2c0e76", 00:15:22.873 "is_configured": true, 00:15:22.873 "data_offset": 256, 00:15:22.873 "data_size": 7936 00:15:22.873 }, 00:15:22.873 { 00:15:22.873 "name": "BaseBdev2", 00:15:22.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.873 "is_configured": false, 00:15:22.873 "data_offset": 0, 00:15:22.873 "data_size": 0 00:15:22.873 } 00:15:22.873 ] 00:15:22.873 }' 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.873 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.474 [2024-11-26 
15:31:21.654234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.474 [2024-11-26 15:31:21.654329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.474 [2024-11-26 15:31:21.666276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.474 [2024-11-26 15:31:21.668470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.474 [2024-11-26 15:31:21.668562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.474 "name": "Existed_Raid", 00:15:23.474 "uuid": "0f291b7c-d8f9-4203-8510-d46ebc54b330", 00:15:23.474 "strip_size_kb": 0, 00:15:23.474 "state": "configuring", 00:15:23.474 "raid_level": "raid1", 00:15:23.474 "superblock": true, 00:15:23.474 "num_base_bdevs": 2, 00:15:23.474 "num_base_bdevs_discovered": 1, 00:15:23.474 "num_base_bdevs_operational": 2, 00:15:23.474 "base_bdevs_list": [ 00:15:23.474 { 00:15:23.474 "name": "BaseBdev1", 00:15:23.474 "uuid": "07452f6a-54e9-419e-a67b-4a74fd2c0e76", 00:15:23.474 "is_configured": true, 00:15:23.474 "data_offset": 256, 
00:15:23.474 "data_size": 7936 00:15:23.474 }, 00:15:23.474 { 00:15:23.474 "name": "BaseBdev2", 00:15:23.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.474 "is_configured": false, 00:15:23.474 "data_offset": 0, 00:15:23.474 "data_size": 0 00:15:23.474 } 00:15:23.474 ] 00:15:23.474 }' 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.474 15:31:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.734 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:23.734 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.734 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.734 [2024-11-26 15:31:22.167099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.734 [2024-11-26 15:31:22.167346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:23.734 [2024-11-26 15:31:22.167365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:23.734 BaseBdev2 00:15:23.734 [2024-11-26 15:31:22.167672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:23.735 [2024-11-26 15:31:22.167838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:23.735 [2024-11-26 15:31:22.167849] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:15:23.735 [2024-11-26 15:31:22.167983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.735 [ 00:15:23.735 { 00:15:23.735 "name": "BaseBdev2", 00:15:23.735 "aliases": [ 00:15:23.735 "792a79db-b0f3-45cc-825f-d4034fbf7454" 00:15:23.735 ], 00:15:23.735 "product_name": "Malloc disk", 00:15:23.735 "block_size": 4096, 00:15:23.735 "num_blocks": 8192, 00:15:23.735 "uuid": "792a79db-b0f3-45cc-825f-d4034fbf7454", 00:15:23.735 "assigned_rate_limits": { 00:15:23.735 "rw_ios_per_sec": 0, 00:15:23.735 "rw_mbytes_per_sec": 0, 00:15:23.735 "r_mbytes_per_sec": 0, 00:15:23.735 "w_mbytes_per_sec": 0 00:15:23.735 }, 
00:15:23.735 "claimed": true, 00:15:23.735 "claim_type": "exclusive_write", 00:15:23.735 "zoned": false, 00:15:23.735 "supported_io_types": { 00:15:23.735 "read": true, 00:15:23.735 "write": true, 00:15:23.735 "unmap": true, 00:15:23.735 "flush": true, 00:15:23.735 "reset": true, 00:15:23.735 "nvme_admin": false, 00:15:23.735 "nvme_io": false, 00:15:23.735 "nvme_io_md": false, 00:15:23.735 "write_zeroes": true, 00:15:23.735 "zcopy": true, 00:15:23.735 "get_zone_info": false, 00:15:23.735 "zone_management": false, 00:15:23.735 "zone_append": false, 00:15:23.735 "compare": false, 00:15:23.735 "compare_and_write": false, 00:15:23.735 "abort": true, 00:15:23.735 "seek_hole": false, 00:15:23.735 "seek_data": false, 00:15:23.735 "copy": true, 00:15:23.735 "nvme_iov_md": false 00:15:23.735 }, 00:15:23.735 "memory_domains": [ 00:15:23.735 { 00:15:23.735 "dma_device_id": "system", 00:15:23.735 "dma_device_type": 1 00:15:23.735 }, 00:15:23.735 { 00:15:23.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.735 "dma_device_type": 2 00:15:23.735 } 00:15:23.735 ], 00:15:23.735 "driver_specific": {} 00:15:23.735 } 00:15:23.735 ] 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.735 15:31:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.735 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.995 "name": "Existed_Raid", 00:15:23.995 "uuid": "0f291b7c-d8f9-4203-8510-d46ebc54b330", 00:15:23.995 "strip_size_kb": 0, 00:15:23.995 "state": "online", 00:15:23.995 "raid_level": "raid1", 00:15:23.995 "superblock": true, 00:15:23.995 "num_base_bdevs": 2, 00:15:23.995 "num_base_bdevs_discovered": 2, 00:15:23.995 "num_base_bdevs_operational": 2, 00:15:23.995 "base_bdevs_list": [ 00:15:23.995 { 00:15:23.995 "name": "BaseBdev1", 00:15:23.995 "uuid": 
"07452f6a-54e9-419e-a67b-4a74fd2c0e76", 00:15:23.995 "is_configured": true, 00:15:23.995 "data_offset": 256, 00:15:23.995 "data_size": 7936 00:15:23.995 }, 00:15:23.995 { 00:15:23.995 "name": "BaseBdev2", 00:15:23.995 "uuid": "792a79db-b0f3-45cc-825f-d4034fbf7454", 00:15:23.995 "is_configured": true, 00:15:23.995 "data_offset": 256, 00:15:23.995 "data_size": 7936 00:15:23.995 } 00:15:23.995 ] 00:15:23.995 }' 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.995 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.255 [2024-11-26 15:31:22.643512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.255 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:24.255 "name": "Existed_Raid", 00:15:24.255 "aliases": [ 00:15:24.255 "0f291b7c-d8f9-4203-8510-d46ebc54b330" 00:15:24.255 ], 00:15:24.255 "product_name": "Raid Volume", 00:15:24.255 "block_size": 4096, 00:15:24.255 "num_blocks": 7936, 00:15:24.255 "uuid": "0f291b7c-d8f9-4203-8510-d46ebc54b330", 00:15:24.255 "assigned_rate_limits": { 00:15:24.255 "rw_ios_per_sec": 0, 00:15:24.255 "rw_mbytes_per_sec": 0, 00:15:24.255 "r_mbytes_per_sec": 0, 00:15:24.255 "w_mbytes_per_sec": 0 00:15:24.255 }, 00:15:24.255 "claimed": false, 00:15:24.255 "zoned": false, 00:15:24.255 "supported_io_types": { 00:15:24.255 "read": true, 00:15:24.255 "write": true, 00:15:24.255 "unmap": false, 00:15:24.255 "flush": false, 00:15:24.255 "reset": true, 00:15:24.255 "nvme_admin": false, 00:15:24.255 "nvme_io": false, 00:15:24.255 "nvme_io_md": false, 00:15:24.255 "write_zeroes": true, 00:15:24.255 "zcopy": false, 00:15:24.255 "get_zone_info": false, 00:15:24.255 "zone_management": false, 00:15:24.255 "zone_append": false, 00:15:24.255 "compare": false, 00:15:24.255 "compare_and_write": false, 00:15:24.255 "abort": false, 00:15:24.255 "seek_hole": false, 00:15:24.255 "seek_data": false, 00:15:24.255 "copy": false, 00:15:24.255 "nvme_iov_md": false 00:15:24.255 }, 00:15:24.255 "memory_domains": [ 00:15:24.255 { 00:15:24.255 "dma_device_id": "system", 00:15:24.255 "dma_device_type": 1 00:15:24.255 }, 00:15:24.255 { 00:15:24.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.255 "dma_device_type": 2 00:15:24.255 }, 00:15:24.255 { 00:15:24.255 "dma_device_id": "system", 00:15:24.255 "dma_device_type": 1 00:15:24.256 }, 00:15:24.256 { 00:15:24.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.256 "dma_device_type": 2 00:15:24.256 } 00:15:24.256 ], 00:15:24.256 "driver_specific": { 00:15:24.256 "raid": { 00:15:24.256 "uuid": 
"0f291b7c-d8f9-4203-8510-d46ebc54b330", 00:15:24.256 "strip_size_kb": 0, 00:15:24.256 "state": "online", 00:15:24.256 "raid_level": "raid1", 00:15:24.256 "superblock": true, 00:15:24.256 "num_base_bdevs": 2, 00:15:24.256 "num_base_bdevs_discovered": 2, 00:15:24.256 "num_base_bdevs_operational": 2, 00:15:24.256 "base_bdevs_list": [ 00:15:24.256 { 00:15:24.256 "name": "BaseBdev1", 00:15:24.256 "uuid": "07452f6a-54e9-419e-a67b-4a74fd2c0e76", 00:15:24.256 "is_configured": true, 00:15:24.256 "data_offset": 256, 00:15:24.256 "data_size": 7936 00:15:24.256 }, 00:15:24.256 { 00:15:24.256 "name": "BaseBdev2", 00:15:24.256 "uuid": "792a79db-b0f3-45cc-825f-d4034fbf7454", 00:15:24.256 "is_configured": true, 00:15:24.256 "data_offset": 256, 00:15:24.256 "data_size": 7936 00:15:24.256 } 00:15:24.256 ] 00:15:24.256 } 00:15:24.256 } 00:15:24.256 }' 00:15:24.256 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:24.516 BaseBdev2' 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.516 
15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.516 [2024-11-26 15:31:22.875388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:24.516 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.517 "name": "Existed_Raid", 00:15:24.517 "uuid": "0f291b7c-d8f9-4203-8510-d46ebc54b330", 00:15:24.517 "strip_size_kb": 0, 00:15:24.517 "state": "online", 00:15:24.517 "raid_level": "raid1", 00:15:24.517 "superblock": true, 00:15:24.517 "num_base_bdevs": 2, 00:15:24.517 "num_base_bdevs_discovered": 1, 00:15:24.517 "num_base_bdevs_operational": 1, 00:15:24.517 "base_bdevs_list": [ 00:15:24.517 { 00:15:24.517 "name": null, 00:15:24.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.517 "is_configured": false, 00:15:24.517 "data_offset": 0, 00:15:24.517 "data_size": 7936 00:15:24.517 }, 00:15:24.517 { 00:15:24.517 "name": "BaseBdev2", 00:15:24.517 "uuid": "792a79db-b0f3-45cc-825f-d4034fbf7454", 00:15:24.517 "is_configured": true, 00:15:24.517 "data_offset": 256, 00:15:24.517 "data_size": 7936 00:15:24.517 } 00:15:24.517 ] 00:15:24.517 }' 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.517 15:31:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.087 [2024-11-26 15:31:23.380280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.087 [2024-11-26 15:31:23.380446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.087 [2024-11-26 15:31:23.401463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.087 [2024-11-26 15:31:23.401593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.087 [2024-11-26 15:31:23.401636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 
00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 97827 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 97827 ']' 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 97827 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97827 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
97827' 00:15:25.087 killing process with pid 97827 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 97827 00:15:25.087 [2024-11-26 15:31:23.501861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.087 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 97827 00:15:25.087 [2024-11-26 15:31:23.503409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.659 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:25.659 00:15:25.659 real 0m4.126s 00:15:25.659 user 0m6.308s 00:15:25.659 sys 0m0.983s 00:15:25.659 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.659 ************************************ 00:15:25.659 END TEST raid_state_function_test_sb_4k 00:15:25.659 ************************************ 00:15:25.659 15:31:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.659 15:31:23 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:25.659 15:31:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:25.659 15:31:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.659 15:31:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.659 ************************************ 00:15:25.659 START TEST raid_superblock_test_4k 00:15:25.659 ************************************ 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=98069 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 98069 00:15:25.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 98069 ']' 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.659 15:31:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.659 [2024-11-26 15:31:24.013284] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:25.659 [2024-11-26 15:31:24.013414] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98069 ] 00:15:25.919 [2024-11-26 15:31:24.154112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:25.919 [2024-11-26 15:31:24.191331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.919 [2024-11-26 15:31:24.232825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.919 [2024-11-26 15:31:24.310437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.919 [2024-11-26 15:31:24.310470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.491 malloc1 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.491 [2024-11-26 15:31:24.857728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:26.491 [2024-11-26 15:31:24.857880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.491 [2024-11-26 15:31:24.857931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:26.491 [2024-11-26 15:31:24.857979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.491 [2024-11-26 15:31:24.860406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.491 [2024-11-26 15:31:24.860476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:26.491 pt1 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:26.491 15:31:24 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.491 malloc2 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.491 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.491 [2024-11-26 15:31:24.892279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:26.491 [2024-11-26 15:31:24.892393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.491 [2024-11-26 15:31:24.892427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:26.491 [2024-11-26 15:31:24.892454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.492 [2024-11-26 15:31:24.894847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.492 [2024-11-26 15:31:24.894917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:26.492 pt2 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( 
i++ )) 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.492 [2024-11-26 15:31:24.904323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:26.492 [2024-11-26 15:31:24.906555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:26.492 [2024-11-26 15:31:24.906753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:26.492 [2024-11-26 15:31:24.906800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:26.492 [2024-11-26 15:31:24.907080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:26.492 [2024-11-26 15:31:24.907268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:26.492 [2024-11-26 15:31:24.907315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:26.492 [2024-11-26 15:31:24.907484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.492 15:31:24 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.492 "name": "raid_bdev1", 00:15:26.492 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:26.492 "strip_size_kb": 0, 00:15:26.492 "state": "online", 00:15:26.492 "raid_level": "raid1", 00:15:26.492 "superblock": true, 00:15:26.492 "num_base_bdevs": 2, 00:15:26.492 "num_base_bdevs_discovered": 2, 00:15:26.492 "num_base_bdevs_operational": 2, 00:15:26.492 "base_bdevs_list": [ 00:15:26.492 { 00:15:26.492 "name": "pt1", 00:15:26.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.492 "is_configured": true, 00:15:26.492 "data_offset": 256, 00:15:26.492 "data_size": 
7936 00:15:26.492 }, 00:15:26.492 { 00:15:26.492 "name": "pt2", 00:15:26.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.492 "is_configured": true, 00:15:26.492 "data_offset": 256, 00:15:26.492 "data_size": 7936 00:15:26.492 } 00:15:26.492 ] 00:15:26.492 }' 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.492 15:31:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.063 [2024-11-26 15:31:25.344745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.063 "name": "raid_bdev1", 00:15:27.063 "aliases": [ 00:15:27.063 
"9f063847-71c1-4b99-bfc7-5c56aff64252" 00:15:27.063 ], 00:15:27.063 "product_name": "Raid Volume", 00:15:27.063 "block_size": 4096, 00:15:27.063 "num_blocks": 7936, 00:15:27.063 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:27.063 "assigned_rate_limits": { 00:15:27.063 "rw_ios_per_sec": 0, 00:15:27.063 "rw_mbytes_per_sec": 0, 00:15:27.063 "r_mbytes_per_sec": 0, 00:15:27.063 "w_mbytes_per_sec": 0 00:15:27.063 }, 00:15:27.063 "claimed": false, 00:15:27.063 "zoned": false, 00:15:27.063 "supported_io_types": { 00:15:27.063 "read": true, 00:15:27.063 "write": true, 00:15:27.063 "unmap": false, 00:15:27.063 "flush": false, 00:15:27.063 "reset": true, 00:15:27.063 "nvme_admin": false, 00:15:27.063 "nvme_io": false, 00:15:27.063 "nvme_io_md": false, 00:15:27.063 "write_zeroes": true, 00:15:27.063 "zcopy": false, 00:15:27.063 "get_zone_info": false, 00:15:27.063 "zone_management": false, 00:15:27.063 "zone_append": false, 00:15:27.063 "compare": false, 00:15:27.063 "compare_and_write": false, 00:15:27.063 "abort": false, 00:15:27.063 "seek_hole": false, 00:15:27.063 "seek_data": false, 00:15:27.063 "copy": false, 00:15:27.063 "nvme_iov_md": false 00:15:27.063 }, 00:15:27.063 "memory_domains": [ 00:15:27.063 { 00:15:27.063 "dma_device_id": "system", 00:15:27.063 "dma_device_type": 1 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.063 "dma_device_type": 2 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "dma_device_id": "system", 00:15:27.063 "dma_device_type": 1 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.063 "dma_device_type": 2 00:15:27.063 } 00:15:27.063 ], 00:15:27.063 "driver_specific": { 00:15:27.063 "raid": { 00:15:27.063 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:27.063 "strip_size_kb": 0, 00:15:27.063 "state": "online", 00:15:27.063 "raid_level": "raid1", 00:15:27.063 "superblock": true, 00:15:27.063 "num_base_bdevs": 2, 00:15:27.063 
"num_base_bdevs_discovered": 2, 00:15:27.063 "num_base_bdevs_operational": 2, 00:15:27.063 "base_bdevs_list": [ 00:15:27.063 { 00:15:27.063 "name": "pt1", 00:15:27.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.063 "is_configured": true, 00:15:27.063 "data_offset": 256, 00:15:27.063 "data_size": 7936 00:15:27.063 }, 00:15:27.063 { 00:15:27.063 "name": "pt2", 00:15:27.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.063 "is_configured": true, 00:15:27.063 "data_offset": 256, 00:15:27.063 "data_size": 7936 00:15:27.063 } 00:15:27.063 ] 00:15:27.063 } 00:15:27.063 } 00:15:27.063 }' 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:27.063 pt2' 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 
00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.063 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.324 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.324 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:27.324 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 [2024-11-26 15:31:25.584695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9f063847-71c1-4b99-bfc7-5c56aff64252 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 9f063847-71c1-4b99-bfc7-5c56aff64252 ']' 00:15:27.325 
15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 [2024-11-26 15:31:25.628461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.325 [2024-11-26 15:31:25.628526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.325 [2024-11-26 15:31:25.628665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.325 [2024-11-26 15:31:25.628756] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.325 [2024-11-26 15:31:25.628814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.325 15:31:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 [2024-11-26 15:31:25.768517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:27.325 [2024-11-26 15:31:25.770666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:27.325 [2024-11-26 15:31:25.770758] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:27.325 [2024-11-26 15:31:25.770855] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:27.325 [2024-11-26 15:31:25.770895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.325 [2024-11-26 15:31:25.770915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:15:27.325 request: 00:15:27.325 { 00:15:27.325 "name": "raid_bdev1", 00:15:27.325 "raid_level": "raid1", 00:15:27.325 "base_bdevs": [ 00:15:27.325 "malloc1", 
00:15:27.325 "malloc2" 00:15:27.325 ], 00:15:27.325 "superblock": false, 00:15:27.325 "method": "bdev_raid_create", 00:15:27.325 "req_id": 1 00:15:27.325 } 00:15:27.325 Got JSON-RPC error response 00:15:27.325 response: 00:15:27.325 { 00:15:27.325 "code": -17, 00:15:27.325 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:27.325 } 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.325 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
00:15:27.586 [2024-11-26 15:31:25.832518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:27.586 [2024-11-26 15:31:25.832623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.586 [2024-11-26 15:31:25.832653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:27.586 [2024-11-26 15:31:25.832683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.586 [2024-11-26 15:31:25.835046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.586 [2024-11-26 15:31:25.835131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:27.586 [2024-11-26 15:31:25.835218] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:27.586 [2024-11-26 15:31:25.835286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:27.586 pt1 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.586 "name": "raid_bdev1", 00:15:27.586 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:27.586 "strip_size_kb": 0, 00:15:27.586 "state": "configuring", 00:15:27.586 "raid_level": "raid1", 00:15:27.586 "superblock": true, 00:15:27.586 "num_base_bdevs": 2, 00:15:27.586 "num_base_bdevs_discovered": 1, 00:15:27.586 "num_base_bdevs_operational": 2, 00:15:27.586 "base_bdevs_list": [ 00:15:27.586 { 00:15:27.586 "name": "pt1", 00:15:27.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.586 "is_configured": true, 00:15:27.586 "data_offset": 256, 00:15:27.586 "data_size": 7936 00:15:27.586 }, 00:15:27.586 { 00:15:27.586 "name": null, 00:15:27.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.586 "is_configured": false, 00:15:27.586 "data_offset": 256, 00:15:27.586 "data_size": 7936 00:15:27.586 } 00:15:27.586 ] 00:15:27.586 }' 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.586 15:31:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.847 15:31:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.847 [2024-11-26 15:31:26.284657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.847 [2024-11-26 15:31:26.284751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.847 [2024-11-26 15:31:26.284786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:27.847 [2024-11-26 15:31:26.284815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.847 [2024-11-26 15:31:26.285187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.847 [2024-11-26 15:31:26.285259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.847 [2024-11-26 15:31:26.285334] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:27.847 [2024-11-26 15:31:26.285383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.847 [2024-11-26 15:31:26.285487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:27.847 [2024-11-26 15:31:26.285527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:27.847 [2024-11-26 15:31:26.285790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:27.847 [2024-11-26 15:31:26.285959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:27.847 [2024-11-26 15:31:26.286000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:27.847 [2024-11-26 15:31:26.286127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.847 pt2 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.847 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.108 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.108 "name": "raid_bdev1", 00:15:28.108 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:28.108 "strip_size_kb": 0, 00:15:28.108 "state": "online", 00:15:28.108 "raid_level": "raid1", 00:15:28.108 "superblock": true, 00:15:28.108 "num_base_bdevs": 2, 00:15:28.108 "num_base_bdevs_discovered": 2, 00:15:28.108 "num_base_bdevs_operational": 2, 00:15:28.108 "base_bdevs_list": [ 00:15:28.108 { 00:15:28.108 "name": "pt1", 00:15:28.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.108 "is_configured": true, 00:15:28.108 "data_offset": 256, 00:15:28.108 "data_size": 7936 00:15:28.108 }, 00:15:28.108 { 00:15:28.108 "name": "pt2", 00:15:28.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.108 "is_configured": true, 00:15:28.108 "data_offset": 256, 00:15:28.108 "data_size": 7936 00:15:28.108 } 00:15:28.108 ] 00:15:28.108 }' 00:15:28.108 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.108 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.368 
15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.368 [2024-11-26 15:31:26.736987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.368 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.368 "name": "raid_bdev1", 00:15:28.368 "aliases": [ 00:15:28.368 "9f063847-71c1-4b99-bfc7-5c56aff64252" 00:15:28.368 ], 00:15:28.368 "product_name": "Raid Volume", 00:15:28.368 "block_size": 4096, 00:15:28.368 "num_blocks": 7936, 00:15:28.368 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:28.368 "assigned_rate_limits": { 00:15:28.368 "rw_ios_per_sec": 0, 00:15:28.368 "rw_mbytes_per_sec": 0, 00:15:28.368 "r_mbytes_per_sec": 0, 00:15:28.368 "w_mbytes_per_sec": 0 00:15:28.368 }, 00:15:28.368 "claimed": false, 00:15:28.368 "zoned": false, 00:15:28.368 "supported_io_types": { 00:15:28.368 "read": true, 00:15:28.368 "write": true, 00:15:28.368 "unmap": false, 00:15:28.368 "flush": false, 00:15:28.368 "reset": true, 00:15:28.368 "nvme_admin": false, 00:15:28.368 "nvme_io": false, 00:15:28.368 "nvme_io_md": false, 00:15:28.368 "write_zeroes": true, 00:15:28.368 "zcopy": false, 00:15:28.368 "get_zone_info": 
false, 00:15:28.368 "zone_management": false, 00:15:28.368 "zone_append": false, 00:15:28.368 "compare": false, 00:15:28.368 "compare_and_write": false, 00:15:28.368 "abort": false, 00:15:28.368 "seek_hole": false, 00:15:28.368 "seek_data": false, 00:15:28.368 "copy": false, 00:15:28.368 "nvme_iov_md": false 00:15:28.368 }, 00:15:28.368 "memory_domains": [ 00:15:28.368 { 00:15:28.368 "dma_device_id": "system", 00:15:28.368 "dma_device_type": 1 00:15:28.368 }, 00:15:28.368 { 00:15:28.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.368 "dma_device_type": 2 00:15:28.368 }, 00:15:28.368 { 00:15:28.368 "dma_device_id": "system", 00:15:28.368 "dma_device_type": 1 00:15:28.368 }, 00:15:28.368 { 00:15:28.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.368 "dma_device_type": 2 00:15:28.368 } 00:15:28.368 ], 00:15:28.368 "driver_specific": { 00:15:28.368 "raid": { 00:15:28.368 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:28.368 "strip_size_kb": 0, 00:15:28.369 "state": "online", 00:15:28.369 "raid_level": "raid1", 00:15:28.369 "superblock": true, 00:15:28.369 "num_base_bdevs": 2, 00:15:28.369 "num_base_bdevs_discovered": 2, 00:15:28.369 "num_base_bdevs_operational": 2, 00:15:28.369 "base_bdevs_list": [ 00:15:28.369 { 00:15:28.369 "name": "pt1", 00:15:28.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.369 "is_configured": true, 00:15:28.369 "data_offset": 256, 00:15:28.369 "data_size": 7936 00:15:28.369 }, 00:15:28.369 { 00:15:28.369 "name": "pt2", 00:15:28.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.369 "is_configured": true, 00:15:28.369 "data_offset": 256, 00:15:28.369 "data_size": 7936 00:15:28.369 } 00:15:28.369 ] 00:15:28.369 } 00:15:28.369 } 00:15:28.369 }' 00:15:28.369 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.369 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:15:28.369 pt2' 00:15:28.369 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.658 15:31:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.658 [2024-11-26 15:31:26.977036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.658 15:31:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 9f063847-71c1-4b99-bfc7-5c56aff64252 '!=' 9f063847-71c1-4b99-bfc7-5c56aff64252 ']' 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.658 [2024-11-26 15:31:27.024836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.658 "name": "raid_bdev1", 00:15:28.658 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:28.658 "strip_size_kb": 0, 00:15:28.658 "state": "online", 00:15:28.658 "raid_level": "raid1", 00:15:28.658 "superblock": true, 00:15:28.658 "num_base_bdevs": 2, 00:15:28.658 "num_base_bdevs_discovered": 1, 
00:15:28.658 "num_base_bdevs_operational": 1, 00:15:28.658 "base_bdevs_list": [ 00:15:28.658 { 00:15:28.658 "name": null, 00:15:28.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.658 "is_configured": false, 00:15:28.658 "data_offset": 0, 00:15:28.658 "data_size": 7936 00:15:28.658 }, 00:15:28.658 { 00:15:28.658 "name": "pt2", 00:15:28.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.658 "is_configured": true, 00:15:28.658 "data_offset": 256, 00:15:28.658 "data_size": 7936 00:15:28.658 } 00:15:28.658 ] 00:15:28.658 }' 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.658 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.228 [2024-11-26 15:31:27.440932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.228 [2024-11-26 15:31:27.440995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.228 [2024-11-26 15:31:27.441070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.228 [2024-11-26 15:31:27.441141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.228 [2024-11-26 15:31:27.441175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.228 
15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.228 [2024-11-26 15:31:27.492945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.228 [2024-11-26 15:31:27.493028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.228 [2024-11-26 15:31:27.493056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:29.228 [2024-11-26 15:31:27.493105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.228 [2024-11-26 15:31:27.495507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.228 [2024-11-26 15:31:27.495576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.228 [2024-11-26 15:31:27.495654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:29.228 [2024-11-26 15:31:27.495719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.228 [2024-11-26 15:31:27.495808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:29.228 [2024-11-26 15:31:27.495852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:29.228 [2024-11-26 15:31:27.496071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:29.228 [2024-11-26 15:31:27.496241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:29.228 [2024-11-26 15:31:27.496282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:29.228 [2024-11-26 15:31:27.496418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.228 pt2 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.228 "name": "raid_bdev1", 00:15:29.228 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:29.228 "strip_size_kb": 0, 00:15:29.228 "state": 
"online", 00:15:29.228 "raid_level": "raid1", 00:15:29.228 "superblock": true, 00:15:29.228 "num_base_bdevs": 2, 00:15:29.228 "num_base_bdevs_discovered": 1, 00:15:29.228 "num_base_bdevs_operational": 1, 00:15:29.228 "base_bdevs_list": [ 00:15:29.228 { 00:15:29.228 "name": null, 00:15:29.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.228 "is_configured": false, 00:15:29.228 "data_offset": 256, 00:15:29.228 "data_size": 7936 00:15:29.228 }, 00:15:29.228 { 00:15:29.228 "name": "pt2", 00:15:29.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.228 "is_configured": true, 00:15:29.228 "data_offset": 256, 00:15:29.228 "data_size": 7936 00:15:29.228 } 00:15:29.228 ] 00:15:29.228 }' 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.228 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.799 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.799 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.799 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.799 [2024-11-26 15:31:27.981065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.799 [2024-11-26 15:31:27.981130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.799 [2024-11-26 15:31:27.981203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.799 [2024-11-26 15:31:27.981270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.799 [2024-11-26 15:31:27.981301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:29.799 15:31:27 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.799 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.799 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.799 15:31:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.799 15:31:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.799 [2024-11-26 15:31:28.033069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.799 [2024-11-26 15:31:28.033163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.799 [2024-11-26 15:31:28.033207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:29.799 [2024-11-26 15:31:28.033234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.799 [2024-11-26 15:31:28.035509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.799 [2024-11-26 15:31:28.035573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.799 
[2024-11-26 15:31:28.035666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:29.799 [2024-11-26 15:31:28.035707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.799 [2024-11-26 15:31:28.035832] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:29.799 [2024-11-26 15:31:28.035891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.799 [2024-11-26 15:31:28.035924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:15:29.799 [2024-11-26 15:31:28.035991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.799 [2024-11-26 15:31:28.036098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:29.799 [2024-11-26 15:31:28.036138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:29.799 [2024-11-26 15:31:28.036374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:29.799 [2024-11-26 15:31:28.036527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:29.799 [2024-11-26 15:31:28.036582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:29.799 [2024-11-26 15:31:28.036716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.799 pt1 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.799 "name": "raid_bdev1", 00:15:29.799 "uuid": "9f063847-71c1-4b99-bfc7-5c56aff64252", 00:15:29.799 "strip_size_kb": 0, 00:15:29.799 "state": "online", 00:15:29.799 "raid_level": "raid1", 00:15:29.799 "superblock": true, 00:15:29.799 "num_base_bdevs": 2, 00:15:29.799 "num_base_bdevs_discovered": 1, 00:15:29.799 "num_base_bdevs_operational": 1, 00:15:29.799 "base_bdevs_list": [ 
00:15:29.799 { 00:15:29.799 "name": null, 00:15:29.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.799 "is_configured": false, 00:15:29.799 "data_offset": 256, 00:15:29.799 "data_size": 7936 00:15:29.799 }, 00:15:29.799 { 00:15:29.799 "name": "pt2", 00:15:29.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.799 "is_configured": true, 00:15:29.799 "data_offset": 256, 00:15:29.799 "data_size": 7936 00:15:29.799 } 00:15:29.799 ] 00:15:29.799 }' 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.799 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.060 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.060 [2024-11-26 15:31:28.533413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.320 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:30.320 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 9f063847-71c1-4b99-bfc7-5c56aff64252 '!=' 9f063847-71c1-4b99-bfc7-5c56aff64252 ']' 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 98069 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 98069 ']' 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 98069 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98069 00:15:30.321 killing process with pid 98069 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98069' 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 98069 00:15:30.321 [2024-11-26 15:31:28.596904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.321 [2024-11-26 15:31:28.596965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.321 [2024-11-26 15:31:28.596998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.321 [2024-11-26 15:31:28.597010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:30.321 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # 
wait 98069 00:15:30.321 [2024-11-26 15:31:28.639386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.581 15:31:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:30.581 00:15:30.581 real 0m5.054s 00:15:30.581 user 0m8.047s 00:15:30.581 sys 0m1.198s 00:15:30.581 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.581 15:31:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.581 ************************************ 00:15:30.581 END TEST raid_superblock_test_4k 00:15:30.581 ************************************ 00:15:30.581 15:31:29 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:30.581 15:31:29 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:30.581 15:31:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:30.581 15:31:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.581 15:31:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.581 ************************************ 00:15:30.581 START TEST raid_rebuild_test_sb_4k 00:15:30.581 ************************************ 00:15:30.581 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:30.581 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:30.581 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:30.581 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:30.581 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:30.581 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:30.843 15:31:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:30.843 
15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=98386 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 98386 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 98386 ']' 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.843 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.843 [2024-11-26 15:31:29.160247] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:30.843 [2024-11-26 15:31:29.160451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98386 ] 00:15:30.843 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:30.843 Zero copy mechanism will not be used. 00:15:30.843 [2024-11-26 15:31:29.300762] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:15:31.104 [2024-11-26 15:31:29.339883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.104 [2024-11-26 15:31:29.382378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.104 [2024-11-26 15:31:29.459921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.104 [2024-11-26 15:31:29.460141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.675 BaseBdev1_malloc 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.675 15:31:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.675 [2024-11-26 15:31:29.999485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:31.675 [2024-11-26 15:31:29.999638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.675 [2024-11-26 15:31:29.999687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000007280 00:15:31.675 [2024-11-26 15:31:29.999739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.675 [2024-11-26 15:31:30.002193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.675 [2024-11-26 15:31:30.002292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:31.675 BaseBdev1 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.675 BaseBdev2_malloc 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.675 [2024-11-26 15:31:30.034066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:31.675 [2024-11-26 15:31:30.034169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.675 [2024-11-26 15:31:30.034217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:31.675 [2024-11-26 15:31:30.034263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.675 [2024-11-26 
15:31:30.036574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.675 [2024-11-26 15:31:30.036645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:31.675 BaseBdev2 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.675 spare_malloc 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.675 spare_delay 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.675 [2024-11-26 15:31:30.080510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.675 [2024-11-26 15:31:30.080636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.675 [2024-11-26 15:31:30.080661] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:31.675 [2024-11-26 15:31:30.080674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.675 [2024-11-26 15:31:30.083220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.675 [2024-11-26 15:31:30.083253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.675 spare 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.675 [2024-11-26 15:31:30.092604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.675 [2024-11-26 15:31:30.094764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.675 [2024-11-26 15:31:30.094975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:31.675 [2024-11-26 15:31:30.095013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:31.675 [2024-11-26 15:31:30.095311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:31.675 [2024-11-26 15:31:30.095511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:31.675 [2024-11-26 15:31:30.095553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:31.675 [2024-11-26 15:31:30.095723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.675 15:31:30 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.675 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.935 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.936 "name": "raid_bdev1", 00:15:31.936 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:31.936 
"strip_size_kb": 0, 00:15:31.936 "state": "online", 00:15:31.936 "raid_level": "raid1", 00:15:31.936 "superblock": true, 00:15:31.936 "num_base_bdevs": 2, 00:15:31.936 "num_base_bdevs_discovered": 2, 00:15:31.936 "num_base_bdevs_operational": 2, 00:15:31.936 "base_bdevs_list": [ 00:15:31.936 { 00:15:31.936 "name": "BaseBdev1", 00:15:31.936 "uuid": "46695f4b-b502-5f0a-9ee1-a7800d035a8d", 00:15:31.936 "is_configured": true, 00:15:31.936 "data_offset": 256, 00:15:31.936 "data_size": 7936 00:15:31.936 }, 00:15:31.936 { 00:15:31.936 "name": "BaseBdev2", 00:15:31.936 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:31.936 "is_configured": true, 00:15:31.936 "data_offset": 256, 00:15:31.936 "data_size": 7936 00:15:31.936 } 00:15:31.936 ] 00:15:31.936 }' 00:15:31.936 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.936 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:32.196 [2024-11-26 15:31:30.528923] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:32.196 
15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.196 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:32.456 [2024-11-26 15:31:30.792806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:15:32.456 /dev/nbd0 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.456 1+0 records in 00:15:32.456 1+0 records out 00:15:32.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550017 s, 7.4 MB/s 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.456 15:31:30 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:32.456 15:31:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:33.027 7936+0 records in 00:15:33.027 7936+0 records out 00:15:33.027 32505856 bytes (33 MB, 31 MiB) copied, 0.615289 s, 52.8 MB/s 00:15:33.027 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:33.027 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.027 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:33.027 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.027 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:33.027 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.027 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.287 [2024-11-26 15:31:31.703263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.287 [2024-11-26 15:31:31.715948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.287 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.547 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.547 "name": "raid_bdev1", 00:15:33.547 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:33.547 "strip_size_kb": 0, 00:15:33.547 "state": "online", 00:15:33.547 "raid_level": "raid1", 00:15:33.547 "superblock": true, 00:15:33.547 "num_base_bdevs": 2, 00:15:33.547 "num_base_bdevs_discovered": 1, 00:15:33.548 "num_base_bdevs_operational": 1, 00:15:33.548 "base_bdevs_list": [ 00:15:33.548 { 00:15:33.548 "name": null, 00:15:33.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.548 "is_configured": false, 00:15:33.548 "data_offset": 0, 00:15:33.548 "data_size": 7936 00:15:33.548 }, 00:15:33.548 { 00:15:33.548 "name": "BaseBdev2", 00:15:33.548 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:33.548 "is_configured": true, 00:15:33.548 "data_offset": 256, 00:15:33.548 "data_size": 7936 00:15:33.548 } 00:15:33.548 ] 00:15:33.548 }' 00:15:33.548 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.548 15:31:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.808 15:31:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.808 15:31:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.808 15:31:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.808 [2024-11-26 15:31:32.192001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.808 [2024-11-26 15:31:32.210333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:15:33.808 15:31:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.808 15:31:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:33.808 [2024-11-26 15:31:32.216899] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:34.747 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.747 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.747 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.747 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.747 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.005 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.005 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.005 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.005 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.005 15:31:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.005 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.005 "name": "raid_bdev1", 00:15:35.005 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:35.005 "strip_size_kb": 0, 00:15:35.005 "state": "online", 00:15:35.005 "raid_level": "raid1", 00:15:35.005 "superblock": true, 00:15:35.005 "num_base_bdevs": 2, 00:15:35.005 "num_base_bdevs_discovered": 2, 00:15:35.005 "num_base_bdevs_operational": 2, 00:15:35.005 "process": { 00:15:35.005 "type": "rebuild", 00:15:35.005 "target": "spare", 00:15:35.005 "progress": { 00:15:35.005 "blocks": 2560, 00:15:35.005 "percent": 32 00:15:35.005 } 00:15:35.005 }, 00:15:35.005 "base_bdevs_list": [ 00:15:35.005 { 00:15:35.005 "name": "spare", 00:15:35.005 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:35.005 "is_configured": true, 00:15:35.005 "data_offset": 256, 00:15:35.005 "data_size": 7936 00:15:35.005 }, 00:15:35.005 { 00:15:35.005 "name": "BaseBdev2", 00:15:35.005 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:35.005 "is_configured": true, 00:15:35.005 "data_offset": 256, 00:15:35.005 "data_size": 7936 00:15:35.005 } 00:15:35.005 ] 00:15:35.005 }' 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.006 15:31:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 [2024-11-26 15:31:33.351159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.006 [2024-11-26 15:31:33.428208] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.006 [2024-11-26 15:31:33.428330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.006 [2024-11-26 15:31:33.428364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.006 [2024-11-26 15:31:33.428388] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.006 15:31:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.006 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.265 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.265 "name": "raid_bdev1", 00:15:35.265 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:35.265 "strip_size_kb": 0, 00:15:35.265 "state": "online", 00:15:35.265 "raid_level": "raid1", 00:15:35.265 "superblock": true, 00:15:35.265 "num_base_bdevs": 2, 00:15:35.265 "num_base_bdevs_discovered": 1, 00:15:35.265 "num_base_bdevs_operational": 1, 00:15:35.265 "base_bdevs_list": [ 00:15:35.265 { 00:15:35.265 "name": null, 00:15:35.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.265 "is_configured": false, 00:15:35.265 "data_offset": 0, 00:15:35.265 "data_size": 7936 00:15:35.265 }, 00:15:35.265 { 00:15:35.265 "name": "BaseBdev2", 00:15:35.265 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:35.265 "is_configured": true, 00:15:35.265 "data_offset": 256, 00:15:35.265 "data_size": 7936 00:15:35.265 } 00:15:35.265 ] 00:15:35.265 }' 00:15:35.265 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.265 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.523 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.523 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.523 15:31:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.523 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.524 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.524 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.524 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.524 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.524 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.524 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.524 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.524 "name": "raid_bdev1", 00:15:35.524 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:35.524 "strip_size_kb": 0, 00:15:35.524 "state": "online", 00:15:35.524 "raid_level": "raid1", 00:15:35.524 "superblock": true, 00:15:35.524 "num_base_bdevs": 2, 00:15:35.524 "num_base_bdevs_discovered": 1, 00:15:35.524 "num_base_bdevs_operational": 1, 00:15:35.524 "base_bdevs_list": [ 00:15:35.524 { 00:15:35.524 "name": null, 00:15:35.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.524 "is_configured": false, 00:15:35.524 "data_offset": 0, 00:15:35.524 "data_size": 7936 00:15:35.524 }, 00:15:35.524 { 00:15:35.524 "name": "BaseBdev2", 00:15:35.524 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:35.524 "is_configured": true, 00:15:35.524 "data_offset": 256, 00:15:35.524 "data_size": 7936 00:15:35.524 } 00:15:35.524 ] 00:15:35.524 }' 00:15:35.524 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.524 15:31:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.524 15:31:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.783 15:31:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.783 15:31:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.783 15:31:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.783 15:31:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.783 [2024-11-26 15:31:34.020001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.783 [2024-11-26 15:31:34.027058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:15:35.783 15:31:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.783 15:31:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:35.783 [2024-11-26 15:31:34.029305] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.723 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.723 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.723 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.723 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.723 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.723 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.723 15:31:35 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.723 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.724 "name": "raid_bdev1", 00:15:36.724 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:36.724 "strip_size_kb": 0, 00:15:36.724 "state": "online", 00:15:36.724 "raid_level": "raid1", 00:15:36.724 "superblock": true, 00:15:36.724 "num_base_bdevs": 2, 00:15:36.724 "num_base_bdevs_discovered": 2, 00:15:36.724 "num_base_bdevs_operational": 2, 00:15:36.724 "process": { 00:15:36.724 "type": "rebuild", 00:15:36.724 "target": "spare", 00:15:36.724 "progress": { 00:15:36.724 "blocks": 2560, 00:15:36.724 "percent": 32 00:15:36.724 } 00:15:36.724 }, 00:15:36.724 "base_bdevs_list": [ 00:15:36.724 { 00:15:36.724 "name": "spare", 00:15:36.724 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:36.724 "is_configured": true, 00:15:36.724 "data_offset": 256, 00:15:36.724 "data_size": 7936 00:15:36.724 }, 00:15:36.724 { 00:15:36.724 "name": "BaseBdev2", 00:15:36.724 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:36.724 "is_configured": true, 00:15:36.724 "data_offset": 256, 00:15:36.724 "data_size": 7936 00:15:36.724 } 00:15:36.724 ] 00:15:36.724 }' 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:36.724 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.724 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.984 15:31:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.984 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.984 "name": "raid_bdev1", 00:15:36.984 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:36.984 "strip_size_kb": 0, 00:15:36.984 "state": "online", 00:15:36.984 "raid_level": "raid1", 00:15:36.984 "superblock": true, 00:15:36.984 "num_base_bdevs": 2, 00:15:36.984 "num_base_bdevs_discovered": 2, 00:15:36.984 "num_base_bdevs_operational": 2, 00:15:36.984 "process": { 00:15:36.984 "type": "rebuild", 00:15:36.984 "target": "spare", 00:15:36.984 "progress": { 00:15:36.984 "blocks": 2816, 00:15:36.984 "percent": 35 00:15:36.984 } 00:15:36.984 }, 00:15:36.984 "base_bdevs_list": [ 00:15:36.984 { 00:15:36.984 "name": "spare", 00:15:36.984 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:36.984 "is_configured": true, 00:15:36.984 "data_offset": 256, 00:15:36.984 "data_size": 7936 00:15:36.984 }, 00:15:36.984 { 00:15:36.984 "name": "BaseBdev2", 00:15:36.984 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:36.984 "is_configured": true, 00:15:36.984 "data_offset": 256, 00:15:36.984 "data_size": 7936 00:15:36.984 } 00:15:36.984 ] 00:15:36.984 }' 00:15:36.984 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.984 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.984 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.984 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.984 15:31:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.924 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.924 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.924 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.925 "name": "raid_bdev1", 00:15:37.925 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:37.925 "strip_size_kb": 0, 00:15:37.925 "state": "online", 00:15:37.925 "raid_level": "raid1", 00:15:37.925 "superblock": true, 00:15:37.925 "num_base_bdevs": 2, 00:15:37.925 "num_base_bdevs_discovered": 2, 00:15:37.925 "num_base_bdevs_operational": 2, 00:15:37.925 "process": { 00:15:37.925 "type": "rebuild", 00:15:37.925 "target": "spare", 00:15:37.925 "progress": { 00:15:37.925 "blocks": 5888, 00:15:37.925 "percent": 74 00:15:37.925 } 00:15:37.925 }, 00:15:37.925 "base_bdevs_list": [ 00:15:37.925 { 00:15:37.925 "name": "spare", 00:15:37.925 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:37.925 "is_configured": true, 00:15:37.925 "data_offset": 256, 00:15:37.925 "data_size": 7936 00:15:37.925 
}, 00:15:37.925 { 00:15:37.925 "name": "BaseBdev2", 00:15:37.925 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:37.925 "is_configured": true, 00:15:37.925 "data_offset": 256, 00:15:37.925 "data_size": 7936 00:15:37.925 } 00:15:37.925 ] 00:15:37.925 }' 00:15:37.925 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.194 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.194 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.194 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.194 15:31:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.769 [2024-11-26 15:31:37.154214] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:38.769 [2024-11-26 15:31:37.154367] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:38.769 [2024-11-26 15:31:37.154513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.029 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.289 "name": "raid_bdev1", 00:15:39.289 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:39.289 "strip_size_kb": 0, 00:15:39.289 "state": "online", 00:15:39.289 "raid_level": "raid1", 00:15:39.289 "superblock": true, 00:15:39.289 "num_base_bdevs": 2, 00:15:39.289 "num_base_bdevs_discovered": 2, 00:15:39.289 "num_base_bdevs_operational": 2, 00:15:39.289 "base_bdevs_list": [ 00:15:39.289 { 00:15:39.289 "name": "spare", 00:15:39.289 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:39.289 "is_configured": true, 00:15:39.289 "data_offset": 256, 00:15:39.289 "data_size": 7936 00:15:39.289 }, 00:15:39.289 { 00:15:39.289 "name": "BaseBdev2", 00:15:39.289 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:39.289 "is_configured": true, 00:15:39.289 "data_offset": 256, 00:15:39.289 "data_size": 7936 00:15:39.289 } 00:15:39.289 ] 00:15:39.289 }' 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.289 "name": "raid_bdev1", 00:15:39.289 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:39.289 "strip_size_kb": 0, 00:15:39.289 "state": "online", 00:15:39.289 "raid_level": "raid1", 00:15:39.289 "superblock": true, 00:15:39.289 "num_base_bdevs": 2, 00:15:39.289 "num_base_bdevs_discovered": 2, 00:15:39.289 "num_base_bdevs_operational": 2, 00:15:39.289 "base_bdevs_list": [ 00:15:39.289 { 00:15:39.289 "name": "spare", 00:15:39.289 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:39.289 "is_configured": true, 00:15:39.289 "data_offset": 256, 00:15:39.289 "data_size": 7936 00:15:39.289 }, 00:15:39.289 { 00:15:39.289 "name": "BaseBdev2", 00:15:39.289 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:39.289 "is_configured": true, 
00:15:39.289 "data_offset": 256, 00:15:39.289 "data_size": 7936 00:15:39.289 } 00:15:39.289 ] 00:15:39.289 }' 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.289 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.556 15:31:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.556 "name": "raid_bdev1", 00:15:39.556 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:39.556 "strip_size_kb": 0, 00:15:39.556 "state": "online", 00:15:39.556 "raid_level": "raid1", 00:15:39.556 "superblock": true, 00:15:39.556 "num_base_bdevs": 2, 00:15:39.556 "num_base_bdevs_discovered": 2, 00:15:39.556 "num_base_bdevs_operational": 2, 00:15:39.556 "base_bdevs_list": [ 00:15:39.556 { 00:15:39.556 "name": "spare", 00:15:39.556 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:39.556 "is_configured": true, 00:15:39.556 "data_offset": 256, 00:15:39.556 "data_size": 7936 00:15:39.556 }, 00:15:39.556 { 00:15:39.556 "name": "BaseBdev2", 00:15:39.556 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:39.556 "is_configured": true, 00:15:39.556 "data_offset": 256, 00:15:39.556 "data_size": 7936 00:15:39.556 } 00:15:39.556 ] 00:15:39.556 }' 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.556 15:31:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.835 [2024-11-26 15:31:38.233681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.835 [2024-11-26 15:31:38.233718] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:15:39.835 [2024-11-26 15:31:38.233801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.835 [2024-11-26 15:31:38.233872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.835 [2024-11-26 15:31:38.233882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:39.835 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:40.110 /dev/nbd0 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.110 1+0 records in 00:15:40.110 1+0 records out 00:15:40.110 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051704 s, 7.9 MB/s 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.110 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:40.370 /dev/nbd1 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.371 1+0 records in 00:15:40.371 1+0 records out 00:15:40.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338681 s, 12.1 MB/s 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.371 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:40.631 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:40.631 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.631 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.631 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.631 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:15:40.631 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.631 15:31:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.631 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.891 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.892 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.892 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:40.892 [2024-11-26 15:31:39.321551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.892 [2024-11-26 15:31:39.321650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.892 [2024-11-26 15:31:39.321694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:40.892 [2024-11-26 15:31:39.321721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.892 [2024-11-26 15:31:39.324265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.892 [2024-11-26 15:31:39.324335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.892 [2024-11-26 15:31:39.324451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:40.892 [2024-11-26 
15:31:39.324527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.892 [2024-11-26 15:31:39.324766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.892 spare 00:15:40.892 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.892 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:40.892 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.892 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.152 [2024-11-26 15:31:39.424887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:41.152 [2024-11-26 15:31:39.424956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:41.152 [2024-11-26 15:31:39.425273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:15:41.152 [2024-11-26 15:31:39.425481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:41.152 [2024-11-26 15:31:39.425525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:41.152 [2024-11-26 15:31:39.425711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.152 "name": "raid_bdev1", 00:15:41.152 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:41.152 "strip_size_kb": 0, 00:15:41.152 "state": "online", 00:15:41.152 "raid_level": "raid1", 00:15:41.152 "superblock": true, 00:15:41.152 "num_base_bdevs": 2, 00:15:41.152 "num_base_bdevs_discovered": 2, 00:15:41.152 "num_base_bdevs_operational": 2, 00:15:41.152 "base_bdevs_list": [ 00:15:41.152 { 00:15:41.152 "name": "spare", 00:15:41.152 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:41.152 "is_configured": true, 00:15:41.152 "data_offset": 256, 00:15:41.152 "data_size": 7936 00:15:41.152 }, 00:15:41.152 { 
00:15:41.152 "name": "BaseBdev2", 00:15:41.152 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:41.152 "is_configured": true, 00:15:41.152 "data_offset": 256, 00:15:41.152 "data_size": 7936 00:15:41.152 } 00:15:41.152 ] 00:15:41.152 }' 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.152 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.721 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.721 "name": "raid_bdev1", 00:15:41.721 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:41.721 "strip_size_kb": 0, 00:15:41.721 "state": "online", 00:15:41.721 "raid_level": "raid1", 00:15:41.721 "superblock": true, 00:15:41.721 "num_base_bdevs": 2, 00:15:41.721 "num_base_bdevs_discovered": 2, 
00:15:41.721 "num_base_bdevs_operational": 2, 00:15:41.721 "base_bdevs_list": [ 00:15:41.721 { 00:15:41.721 "name": "spare", 00:15:41.722 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:41.722 "is_configured": true, 00:15:41.722 "data_offset": 256, 00:15:41.722 "data_size": 7936 00:15:41.722 }, 00:15:41.722 { 00:15:41.722 "name": "BaseBdev2", 00:15:41.722 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:41.722 "is_configured": true, 00:15:41.722 "data_offset": 256, 00:15:41.722 "data_size": 7936 00:15:41.722 } 00:15:41.722 ] 00:15:41.722 }' 00:15:41.722 15:31:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.722 [2024-11-26 15:31:40.109865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.722 15:31:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.722 "name": "raid_bdev1", 00:15:41.722 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:41.722 "strip_size_kb": 0, 00:15:41.722 "state": "online", 00:15:41.722 "raid_level": "raid1", 00:15:41.722 "superblock": true, 00:15:41.722 "num_base_bdevs": 2, 00:15:41.722 "num_base_bdevs_discovered": 1, 00:15:41.722 "num_base_bdevs_operational": 1, 00:15:41.722 "base_bdevs_list": [ 00:15:41.722 { 00:15:41.722 "name": null, 00:15:41.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.722 "is_configured": false, 00:15:41.722 "data_offset": 0, 00:15:41.722 "data_size": 7936 00:15:41.722 }, 00:15:41.722 { 00:15:41.722 "name": "BaseBdev2", 00:15:41.722 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:41.722 "is_configured": true, 00:15:41.722 "data_offset": 256, 00:15:41.722 "data_size": 7936 00:15:41.722 } 00:15:41.722 ] 00:15:41.722 }' 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.722 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.291 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.291 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.291 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.291 [2024-11-26 15:31:40.550035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.291 [2024-11-26 15:31:40.550254] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:42.291 [2024-11-26 15:31:40.550321] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:42.291 [2024-11-26 15:31:40.550413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.291 [2024-11-26 15:31:40.558868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:15:42.291 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.291 15:31:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:42.291 [2024-11-26 15:31:40.561075] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.230 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.231 "name": "raid_bdev1", 00:15:43.231 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:43.231 "strip_size_kb": 0, 00:15:43.231 "state": "online", 
00:15:43.231 "raid_level": "raid1", 00:15:43.231 "superblock": true, 00:15:43.231 "num_base_bdevs": 2, 00:15:43.231 "num_base_bdevs_discovered": 2, 00:15:43.231 "num_base_bdevs_operational": 2, 00:15:43.231 "process": { 00:15:43.231 "type": "rebuild", 00:15:43.231 "target": "spare", 00:15:43.231 "progress": { 00:15:43.231 "blocks": 2560, 00:15:43.231 "percent": 32 00:15:43.231 } 00:15:43.231 }, 00:15:43.231 "base_bdevs_list": [ 00:15:43.231 { 00:15:43.231 "name": "spare", 00:15:43.231 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:43.231 "is_configured": true, 00:15:43.231 "data_offset": 256, 00:15:43.231 "data_size": 7936 00:15:43.231 }, 00:15:43.231 { 00:15:43.231 "name": "BaseBdev2", 00:15:43.231 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:43.231 "is_configured": true, 00:15:43.231 "data_offset": 256, 00:15:43.231 "data_size": 7936 00:15:43.231 } 00:15:43.231 ] 00:15:43.231 }' 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.231 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.492 [2024-11-26 15:31:41.723763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.492 [2024-11-26 15:31:41.770590] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:43.492 [2024-11-26 
15:31:41.770728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.492 [2024-11-26 15:31:41.770764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:43.492 [2024-11-26 15:31:41.770789] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.492 "name": "raid_bdev1", 00:15:43.492 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:43.492 "strip_size_kb": 0, 00:15:43.492 "state": "online", 00:15:43.492 "raid_level": "raid1", 00:15:43.492 "superblock": true, 00:15:43.492 "num_base_bdevs": 2, 00:15:43.492 "num_base_bdevs_discovered": 1, 00:15:43.492 "num_base_bdevs_operational": 1, 00:15:43.492 "base_bdevs_list": [ 00:15:43.492 { 00:15:43.492 "name": null, 00:15:43.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.492 "is_configured": false, 00:15:43.492 "data_offset": 0, 00:15:43.492 "data_size": 7936 00:15:43.492 }, 00:15:43.492 { 00:15:43.492 "name": "BaseBdev2", 00:15:43.492 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:43.492 "is_configured": true, 00:15:43.492 "data_offset": 256, 00:15:43.492 "data_size": 7936 00:15:43.492 } 00:15:43.492 ] 00:15:43.492 }' 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.492 15:31:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.063 15:31:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:44.063 15:31:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.063 15:31:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.063 [2024-11-26 15:31:42.253835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:44.063 [2024-11-26 15:31:42.253961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.063 [2024-11-26 15:31:42.253998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:15:44.063 [2024-11-26 15:31:42.254028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.063 [2024-11-26 15:31:42.254544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.063 [2024-11-26 15:31:42.254612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:44.063 [2024-11-26 15:31:42.254723] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:44.063 [2024-11-26 15:31:42.254787] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:44.063 [2024-11-26 15:31:42.254833] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:44.063 [2024-11-26 15:31:42.254883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.063 [2024-11-26 15:31:42.261615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:15:44.063 spare 00:15:44.063 15:31:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.063 15:31:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:44.063 [2024-11-26 15:31:42.263821] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.003 "name": "raid_bdev1", 00:15:45.003 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:45.003 "strip_size_kb": 0, 00:15:45.003 "state": "online", 00:15:45.003 "raid_level": "raid1", 00:15:45.003 "superblock": true, 00:15:45.003 "num_base_bdevs": 2, 00:15:45.003 "num_base_bdevs_discovered": 2, 00:15:45.003 "num_base_bdevs_operational": 2, 00:15:45.003 "process": { 00:15:45.003 "type": "rebuild", 00:15:45.003 "target": "spare", 00:15:45.003 "progress": { 00:15:45.003 "blocks": 2560, 00:15:45.003 "percent": 32 00:15:45.003 } 00:15:45.003 }, 00:15:45.003 "base_bdevs_list": [ 00:15:45.003 { 00:15:45.003 "name": "spare", 00:15:45.003 "uuid": "7415c305-c1c2-55ad-ba7b-b925d85f76ae", 00:15:45.003 "is_configured": true, 00:15:45.003 "data_offset": 256, 00:15:45.003 "data_size": 7936 00:15:45.003 }, 00:15:45.003 { 00:15:45.003 "name": "BaseBdev2", 00:15:45.003 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:45.003 "is_configured": true, 00:15:45.003 "data_offset": 256, 00:15:45.003 "data_size": 7936 00:15:45.003 } 00:15:45.003 ] 00:15:45.003 }' 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.003 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.003 [2024-11-26 15:31:43.428884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.003 [2024-11-26 15:31:43.473569] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:45.003 [2024-11-26 15:31:43.473702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.003 [2024-11-26 15:31:43.473744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.003 [2024-11-26 15:31:43.473781] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.263 "name": "raid_bdev1", 00:15:45.263 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:45.263 "strip_size_kb": 0, 00:15:45.263 "state": "online", 00:15:45.263 "raid_level": "raid1", 00:15:45.263 "superblock": true, 00:15:45.263 "num_base_bdevs": 2, 00:15:45.263 "num_base_bdevs_discovered": 1, 00:15:45.263 "num_base_bdevs_operational": 1, 00:15:45.263 "base_bdevs_list": [ 00:15:45.263 { 00:15:45.263 "name": null, 00:15:45.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.263 "is_configured": false, 00:15:45.263 "data_offset": 0, 00:15:45.263 "data_size": 7936 00:15:45.263 }, 00:15:45.263 { 00:15:45.263 "name": "BaseBdev2", 00:15:45.263 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:45.263 "is_configured": true, 00:15:45.263 "data_offset": 256, 00:15:45.263 "data_size": 7936 00:15:45.263 } 00:15:45.263 ] 00:15:45.263 }' 
00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.263 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.523 "name": "raid_bdev1", 00:15:45.523 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:45.523 "strip_size_kb": 0, 00:15:45.523 "state": "online", 00:15:45.523 "raid_level": "raid1", 00:15:45.523 "superblock": true, 00:15:45.523 "num_base_bdevs": 2, 00:15:45.523 "num_base_bdevs_discovered": 1, 00:15:45.523 "num_base_bdevs_operational": 1, 00:15:45.523 "base_bdevs_list": [ 00:15:45.523 { 00:15:45.523 "name": null, 00:15:45.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.523 "is_configured": false, 00:15:45.523 "data_offset": 0, 
00:15:45.523 "data_size": 7936 00:15:45.523 }, 00:15:45.523 { 00:15:45.523 "name": "BaseBdev2", 00:15:45.523 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:45.523 "is_configured": true, 00:15:45.523 "data_offset": 256, 00:15:45.523 "data_size": 7936 00:15:45.523 } 00:15:45.523 ] 00:15:45.523 }' 00:15:45.523 15:31:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.783 [2024-11-26 15:31:44.077454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:45.783 [2024-11-26 15:31:44.077507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.783 [2024-11-26 15:31:44.077530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:45.783 [2024-11-26 15:31:44.077541] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.783 [2024-11-26 15:31:44.078028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.783 [2024-11-26 15:31:44.078052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.783 [2024-11-26 15:31:44.078134] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:45.783 [2024-11-26 15:31:44.078154] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:45.783 [2024-11-26 15:31:44.078170] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:45.783 [2024-11-26 15:31:44.078193] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:45.783 BaseBdev1 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.783 15:31:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.723 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.723 "name": "raid_bdev1", 00:15:46.723 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:46.723 "strip_size_kb": 0, 00:15:46.723 "state": "online", 00:15:46.723 "raid_level": "raid1", 00:15:46.723 "superblock": true, 00:15:46.723 "num_base_bdevs": 2, 00:15:46.723 "num_base_bdevs_discovered": 1, 00:15:46.724 "num_base_bdevs_operational": 1, 00:15:46.724 "base_bdevs_list": [ 00:15:46.724 { 00:15:46.724 "name": null, 00:15:46.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.724 "is_configured": false, 00:15:46.724 "data_offset": 0, 00:15:46.724 "data_size": 7936 00:15:46.724 }, 00:15:46.724 { 00:15:46.724 "name": "BaseBdev2", 00:15:46.724 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:46.724 "is_configured": true, 00:15:46.724 "data_offset": 256, 00:15:46.724 "data_size": 7936 00:15:46.724 } 00:15:46.724 ] 00:15:46.724 }' 00:15:46.724 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.724 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.294 "name": "raid_bdev1", 00:15:47.294 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:47.294 "strip_size_kb": 0, 00:15:47.294 "state": "online", 00:15:47.294 "raid_level": "raid1", 00:15:47.294 "superblock": true, 00:15:47.294 "num_base_bdevs": 2, 00:15:47.294 "num_base_bdevs_discovered": 1, 00:15:47.294 "num_base_bdevs_operational": 1, 00:15:47.294 "base_bdevs_list": [ 00:15:47.294 { 00:15:47.294 "name": null, 00:15:47.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.294 "is_configured": false, 00:15:47.294 "data_offset": 0, 00:15:47.294 "data_size": 7936 00:15:47.294 }, 00:15:47.294 { 00:15:47.294 "name": "BaseBdev2", 00:15:47.294 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:47.294 "is_configured": true, 
00:15:47.294 "data_offset": 256, 00:15:47.294 "data_size": 7936 00:15:47.294 } 00:15:47.294 ] 00:15:47.294 }' 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.294 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.294 [2024-11-26 15:31:45.685883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.294 [2024-11-26 15:31:45.686039] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.294 [2024-11-26 15:31:45.686054] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:47.294 request: 00:15:47.294 { 00:15:47.294 "base_bdev": "BaseBdev1", 00:15:47.295 "raid_bdev": "raid_bdev1", 00:15:47.295 "method": "bdev_raid_add_base_bdev", 00:15:47.295 "req_id": 1 00:15:47.295 } 00:15:47.295 Got JSON-RPC error response 00:15:47.295 response: 00:15:47.295 { 00:15:47.295 "code": -22, 00:15:47.295 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:47.295 } 00:15:47.295 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:47.295 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:15:47.295 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.295 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.295 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.295 15:31:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.234 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.493 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.493 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.493 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.493 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.493 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.493 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.493 "name": "raid_bdev1", 00:15:48.493 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:48.493 "strip_size_kb": 0, 00:15:48.493 "state": "online", 00:15:48.493 "raid_level": "raid1", 00:15:48.493 "superblock": true, 00:15:48.493 "num_base_bdevs": 2, 00:15:48.493 "num_base_bdevs_discovered": 1, 00:15:48.493 "num_base_bdevs_operational": 1, 00:15:48.493 "base_bdevs_list": [ 00:15:48.493 { 00:15:48.493 "name": null, 00:15:48.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.493 "is_configured": false, 00:15:48.493 "data_offset": 0, 00:15:48.493 "data_size": 7936 00:15:48.493 }, 00:15:48.493 { 00:15:48.493 "name": "BaseBdev2", 00:15:48.493 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:48.493 "is_configured": true, 00:15:48.493 "data_offset": 256, 00:15:48.493 "data_size": 7936 00:15:48.493 } 00:15:48.493 ] 00:15:48.493 }' 
00:15:48.493 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.493 15:31:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.753 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.753 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.753 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.753 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.753 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.753 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.754 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.754 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.754 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.754 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.014 "name": "raid_bdev1", 00:15:49.014 "uuid": "656b9ad6-e08f-4226-b694-2008440ca567", 00:15:49.014 "strip_size_kb": 0, 00:15:49.014 "state": "online", 00:15:49.014 "raid_level": "raid1", 00:15:49.014 "superblock": true, 00:15:49.014 "num_base_bdevs": 2, 00:15:49.014 "num_base_bdevs_discovered": 1, 00:15:49.014 "num_base_bdevs_operational": 1, 00:15:49.014 "base_bdevs_list": [ 00:15:49.014 { 00:15:49.014 "name": null, 00:15:49.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.014 "is_configured": false, 00:15:49.014 "data_offset": 0, 
00:15:49.014 "data_size": 7936 00:15:49.014 }, 00:15:49.014 { 00:15:49.014 "name": "BaseBdev2", 00:15:49.014 "uuid": "1cfd4f1d-05b7-5387-a404-b3b74b6c392f", 00:15:49.014 "is_configured": true, 00:15:49.014 "data_offset": 256, 00:15:49.014 "data_size": 7936 00:15:49.014 } 00:15:49.014 ] 00:15:49.014 }' 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 98386 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 98386 ']' 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 98386 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.014 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98386 00:15:49.014 killing process with pid 98386 00:15:49.014 Received shutdown signal, test time was about 60.000000 seconds 00:15:49.014 00:15:49.014 Latency(us) 00:15:49.014 [2024-11-26T15:31:47.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.014 [2024-11-26T15:31:47.493Z] =================================================================================================================== 00:15:49.014 [2024-11-26T15:31:47.493Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:49.015 15:31:47 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.015 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.015 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98386' 00:15:49.015 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 98386 00:15:49.015 [2024-11-26 15:31:47.354248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.015 [2024-11-26 15:31:47.354364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.015 [2024-11-26 15:31:47.354405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.015 [2024-11-26 15:31:47.354417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:49.015 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 98386 00:15:49.015 [2024-11-26 15:31:47.411029] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.275 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:49.275 00:15:49.276 real 0m18.671s 00:15:49.276 user 0m24.589s 00:15:49.276 sys 0m2.872s 00:15:49.276 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.276 15:31:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.276 ************************************ 00:15:49.276 END TEST raid_rebuild_test_sb_4k 00:15:49.276 ************************************ 00:15:49.536 15:31:47 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:49.536 15:31:47 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:49.536 15:31:47 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:49.536 15:31:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.536 15:31:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.536 ************************************ 00:15:49.536 START TEST raid_state_function_test_sb_md_separate 00:15:49.536 ************************************ 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=99065 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:49.536 Process raid pid: 99065 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99065' 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 99065 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99065 ']' 00:15:49.536 15:31:47 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.536 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.537 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.537 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.537 15:31:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.537 [2024-11-26 15:31:47.899380] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:49.537 [2024-11-26 15:31:47.899509] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.796 [2024-11-26 15:31:48.035928] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:49.796 [2024-11-26 15:31:48.073664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.796 [2024-11-26 15:31:48.115143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.796 [2024-11-26 15:31:48.192861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.797 [2024-11-26 15:31:48.192894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.366 [2024-11-26 15:31:48.713900] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.366 [2024-11-26 15:31:48.713951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.366 [2024-11-26 15:31:48.713971] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.366 [2024-11-26 15:31:48.713979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.366 "name": "Existed_Raid", 00:15:50.366 "uuid": "6543804c-07e1-4839-ae5b-23d97d31d3de", 00:15:50.366 "strip_size_kb": 0, 00:15:50.366 "state": 
"configuring", 00:15:50.366 "raid_level": "raid1", 00:15:50.366 "superblock": true, 00:15:50.366 "num_base_bdevs": 2, 00:15:50.366 "num_base_bdevs_discovered": 0, 00:15:50.366 "num_base_bdevs_operational": 2, 00:15:50.366 "base_bdevs_list": [ 00:15:50.366 { 00:15:50.366 "name": "BaseBdev1", 00:15:50.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.366 "is_configured": false, 00:15:50.366 "data_offset": 0, 00:15:50.366 "data_size": 0 00:15:50.366 }, 00:15:50.366 { 00:15:50.366 "name": "BaseBdev2", 00:15:50.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.366 "is_configured": false, 00:15:50.366 "data_offset": 0, 00:15:50.366 "data_size": 0 00:15:50.366 } 00:15:50.366 ] 00:15:50.366 }' 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.366 15:31:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.941 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:50.941 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.941 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.941 [2024-11-26 15:31:49.137889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.941 [2024-11-26 15:31:49.137937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:15:50.941 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.941 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:50.941 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.942 [2024-11-26 15:31:49.149925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.942 [2024-11-26 15:31:49.149959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.942 [2024-11-26 15:31:49.149970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.942 [2024-11-26 15:31:49.149976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.942 [2024-11-26 15:31:49.178067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.942 BaseBdev1 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.942 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 
00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.943 [ 00:15:50.943 { 00:15:50.943 "name": "BaseBdev1", 00:15:50.943 "aliases": [ 00:15:50.943 "807515e8-5039-4449-905a-187cd152cd90" 00:15:50.943 ], 00:15:50.943 "product_name": "Malloc disk", 00:15:50.943 "block_size": 4096, 00:15:50.943 "num_blocks": 8192, 00:15:50.943 "uuid": "807515e8-5039-4449-905a-187cd152cd90", 00:15:50.943 "md_size": 32, 00:15:50.943 "md_interleave": false, 00:15:50.943 "dif_type": 0, 00:15:50.943 "assigned_rate_limits": { 00:15:50.943 "rw_ios_per_sec": 0, 00:15:50.943 "rw_mbytes_per_sec": 0, 00:15:50.943 "r_mbytes_per_sec": 0, 00:15:50.943 "w_mbytes_per_sec": 0 00:15:50.943 }, 00:15:50.943 "claimed": true, 00:15:50.943 "claim_type": "exclusive_write", 00:15:50.943 "zoned": false, 00:15:50.943 "supported_io_types": { 00:15:50.943 "read": true, 00:15:50.943 "write": true, 00:15:50.943 "unmap": true, 
00:15:50.943 "flush": true, 00:15:50.943 "reset": true, 00:15:50.943 "nvme_admin": false, 00:15:50.943 "nvme_io": false, 00:15:50.943 "nvme_io_md": false, 00:15:50.943 "write_zeroes": true, 00:15:50.943 "zcopy": true, 00:15:50.943 "get_zone_info": false, 00:15:50.943 "zone_management": false, 00:15:50.943 "zone_append": false, 00:15:50.943 "compare": false, 00:15:50.943 "compare_and_write": false, 00:15:50.943 "abort": true, 00:15:50.943 "seek_hole": false, 00:15:50.943 "seek_data": false, 00:15:50.943 "copy": true, 00:15:50.943 "nvme_iov_md": false 00:15:50.943 }, 00:15:50.943 "memory_domains": [ 00:15:50.943 { 00:15:50.943 "dma_device_id": "system", 00:15:50.943 "dma_device_type": 1 00:15:50.943 }, 00:15:50.943 { 00:15:50.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.943 "dma_device_type": 2 00:15:50.943 } 00:15:50.943 ], 00:15:50.943 "driver_specific": {} 00:15:50.943 } 00:15:50.943 ] 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.943 15:31:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.943 "name": "Existed_Raid", 00:15:50.943 "uuid": "43f09db9-fad0-4054-bb32-4dde8b0e3b17", 00:15:50.943 "strip_size_kb": 0, 00:15:50.943 "state": "configuring", 00:15:50.943 "raid_level": "raid1", 00:15:50.943 "superblock": true, 00:15:50.943 "num_base_bdevs": 2, 00:15:50.943 "num_base_bdevs_discovered": 1, 00:15:50.943 "num_base_bdevs_operational": 2, 00:15:50.943 "base_bdevs_list": [ 00:15:50.943 { 00:15:50.943 "name": "BaseBdev1", 00:15:50.943 "uuid": "807515e8-5039-4449-905a-187cd152cd90", 00:15:50.943 "is_configured": true, 00:15:50.943 "data_offset": 256, 00:15:50.943 "data_size": 7936 00:15:50.943 }, 00:15:50.943 { 00:15:50.943 "name": "BaseBdev2", 00:15:50.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.943 "is_configured": 
false, 00:15:50.943 "data_offset": 0, 00:15:50.943 "data_size": 0 00:15:50.943 } 00:15:50.943 ] 00:15:50.943 }' 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.943 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.201 [2024-11-26 15:31:49.658236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.201 [2024-11-26 15:31:49.658281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.201 [2024-11-26 15:31:49.666296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.201 [2024-11-26 15:31:49.668418] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.201 [2024-11-26 15:31:49.668451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.201 15:31:49 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.201 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.460 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.460 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.460 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.460 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.460 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.460 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.460 "name": "Existed_Raid", 00:15:51.460 "uuid": "e7467e76-c1eb-4426-ab50-e1b1800d8917", 00:15:51.460 "strip_size_kb": 0, 00:15:51.460 "state": "configuring", 00:15:51.460 "raid_level": "raid1", 00:15:51.460 "superblock": true, 00:15:51.460 "num_base_bdevs": 2, 00:15:51.460 "num_base_bdevs_discovered": 1, 00:15:51.460 "num_base_bdevs_operational": 2, 00:15:51.460 "base_bdevs_list": [ 00:15:51.460 { 00:15:51.460 "name": "BaseBdev1", 00:15:51.460 "uuid": "807515e8-5039-4449-905a-187cd152cd90", 00:15:51.460 "is_configured": true, 00:15:51.460 "data_offset": 256, 00:15:51.460 "data_size": 7936 00:15:51.460 }, 00:15:51.460 { 00:15:51.460 "name": "BaseBdev2", 00:15:51.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.460 "is_configured": false, 00:15:51.460 "data_offset": 0, 00:15:51.460 "data_size": 0 00:15:51.460 } 00:15:51.460 ] 00:15:51.460 }' 00:15:51.460 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.460 15:31:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.718 [2024-11-26 15:31:50.148448] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.718 [2024-11-26 15:31:50.148641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:51.718 [2024-11-26 15:31:50.148659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:51.718 [2024-11-26 15:31:50.148777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:51.718 [2024-11-26 15:31:50.148915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:51.718 [2024-11-26 15:31:50.148932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:15:51.718 [2024-11-26 15:31:50.149011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.718 BaseBdev2 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.718 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.718 [ 00:15:51.718 { 00:15:51.718 "name": "BaseBdev2", 00:15:51.718 "aliases": [ 00:15:51.718 "85a70d93-77ec-4131-adc1-50432a1ce0d6" 00:15:51.718 ], 00:15:51.718 "product_name": "Malloc disk", 00:15:51.718 "block_size": 4096, 00:15:51.718 "num_blocks": 8192, 00:15:51.718 "uuid": "85a70d93-77ec-4131-adc1-50432a1ce0d6", 00:15:51.718 "md_size": 32, 00:15:51.718 "md_interleave": false, 00:15:51.718 "dif_type": 0, 00:15:51.718 "assigned_rate_limits": { 00:15:51.718 "rw_ios_per_sec": 0, 00:15:51.718 "rw_mbytes_per_sec": 0, 00:15:51.718 "r_mbytes_per_sec": 0, 00:15:51.718 "w_mbytes_per_sec": 0 00:15:51.718 }, 00:15:51.718 "claimed": true, 00:15:51.718 "claim_type": "exclusive_write", 00:15:51.718 "zoned": false, 00:15:51.719 "supported_io_types": { 00:15:51.719 "read": true, 00:15:51.719 "write": true, 00:15:51.719 "unmap": true, 00:15:51.719 "flush": true, 00:15:51.719 "reset": true, 00:15:51.719 "nvme_admin": false, 00:15:51.719 "nvme_io": false, 00:15:51.719 "nvme_io_md": false, 00:15:51.719 "write_zeroes": true, 00:15:51.719 "zcopy": true, 00:15:51.719 "get_zone_info": false, 00:15:51.719 "zone_management": false, 00:15:51.719 "zone_append": false, 00:15:51.719 "compare": false, 00:15:51.719 "compare_and_write": false, 00:15:51.719 "abort": true, 00:15:51.719 "seek_hole": false, 
00:15:51.719 "seek_data": false, 00:15:51.719 "copy": true, 00:15:51.719 "nvme_iov_md": false 00:15:51.719 }, 00:15:51.719 "memory_domains": [ 00:15:51.719 { 00:15:51.719 "dma_device_id": "system", 00:15:51.719 "dma_device_type": 1 00:15:51.719 }, 00:15:51.719 { 00:15:51.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.719 "dma_device_type": 2 00:15:51.719 } 00:15:51.719 ], 00:15:51.719 "driver_specific": {} 00:15:51.719 } 00:15:51.719 ] 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.719 
15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.719 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.976 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.976 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.976 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.976 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.976 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.976 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.976 "name": "Existed_Raid", 00:15:51.977 "uuid": "e7467e76-c1eb-4426-ab50-e1b1800d8917", 00:15:51.977 "strip_size_kb": 0, 00:15:51.977 "state": "online", 00:15:51.977 "raid_level": "raid1", 00:15:51.977 "superblock": true, 00:15:51.977 "num_base_bdevs": 2, 00:15:51.977 "num_base_bdevs_discovered": 2, 00:15:51.977 "num_base_bdevs_operational": 2, 00:15:51.977 "base_bdevs_list": [ 00:15:51.977 { 00:15:51.977 "name": "BaseBdev1", 00:15:51.977 "uuid": "807515e8-5039-4449-905a-187cd152cd90", 00:15:51.977 "is_configured": true, 00:15:51.977 "data_offset": 256, 00:15:51.977 "data_size": 7936 00:15:51.977 }, 00:15:51.977 { 00:15:51.977 "name": "BaseBdev2", 00:15:51.977 "uuid": "85a70d93-77ec-4131-adc1-50432a1ce0d6", 00:15:51.977 "is_configured": true, 00:15:51.977 "data_offset": 256, 00:15:51.977 "data_size": 7936 00:15:51.977 } 00:15:51.977 ] 00:15:51.977 }' 00:15:51.977 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:15:51.977 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.235 [2024-11-26 15:31:50.648899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.235 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.235 "name": "Existed_Raid", 00:15:52.235 "aliases": [ 00:15:52.235 "e7467e76-c1eb-4426-ab50-e1b1800d8917" 00:15:52.235 ], 00:15:52.235 "product_name": "Raid Volume", 00:15:52.235 "block_size": 4096, 00:15:52.235 "num_blocks": 7936, 
00:15:52.235 "uuid": "e7467e76-c1eb-4426-ab50-e1b1800d8917", 00:15:52.235 "md_size": 32, 00:15:52.235 "md_interleave": false, 00:15:52.235 "dif_type": 0, 00:15:52.235 "assigned_rate_limits": { 00:15:52.235 "rw_ios_per_sec": 0, 00:15:52.235 "rw_mbytes_per_sec": 0, 00:15:52.235 "r_mbytes_per_sec": 0, 00:15:52.235 "w_mbytes_per_sec": 0 00:15:52.235 }, 00:15:52.235 "claimed": false, 00:15:52.235 "zoned": false, 00:15:52.235 "supported_io_types": { 00:15:52.235 "read": true, 00:15:52.235 "write": true, 00:15:52.236 "unmap": false, 00:15:52.236 "flush": false, 00:15:52.236 "reset": true, 00:15:52.236 "nvme_admin": false, 00:15:52.236 "nvme_io": false, 00:15:52.236 "nvme_io_md": false, 00:15:52.236 "write_zeroes": true, 00:15:52.236 "zcopy": false, 00:15:52.236 "get_zone_info": false, 00:15:52.236 "zone_management": false, 00:15:52.236 "zone_append": false, 00:15:52.236 "compare": false, 00:15:52.236 "compare_and_write": false, 00:15:52.236 "abort": false, 00:15:52.236 "seek_hole": false, 00:15:52.236 "seek_data": false, 00:15:52.236 "copy": false, 00:15:52.236 "nvme_iov_md": false 00:15:52.236 }, 00:15:52.236 "memory_domains": [ 00:15:52.236 { 00:15:52.236 "dma_device_id": "system", 00:15:52.236 "dma_device_type": 1 00:15:52.236 }, 00:15:52.236 { 00:15:52.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.236 "dma_device_type": 2 00:15:52.236 }, 00:15:52.236 { 00:15:52.236 "dma_device_id": "system", 00:15:52.236 "dma_device_type": 1 00:15:52.236 }, 00:15:52.236 { 00:15:52.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.236 "dma_device_type": 2 00:15:52.236 } 00:15:52.236 ], 00:15:52.236 "driver_specific": { 00:15:52.236 "raid": { 00:15:52.236 "uuid": "e7467e76-c1eb-4426-ab50-e1b1800d8917", 00:15:52.236 "strip_size_kb": 0, 00:15:52.236 "state": "online", 00:15:52.236 "raid_level": "raid1", 00:15:52.236 "superblock": true, 00:15:52.236 "num_base_bdevs": 2, 00:15:52.236 "num_base_bdevs_discovered": 2, 00:15:52.236 "num_base_bdevs_operational": 2, 00:15:52.236 
"base_bdevs_list": [ 00:15:52.236 { 00:15:52.236 "name": "BaseBdev1", 00:15:52.236 "uuid": "807515e8-5039-4449-905a-187cd152cd90", 00:15:52.236 "is_configured": true, 00:15:52.236 "data_offset": 256, 00:15:52.236 "data_size": 7936 00:15:52.236 }, 00:15:52.236 { 00:15:52.236 "name": "BaseBdev2", 00:15:52.236 "uuid": "85a70d93-77ec-4131-adc1-50432a1ce0d6", 00:15:52.236 "is_configured": true, 00:15:52.236 "data_offset": 256, 00:15:52.236 "data_size": 7936 00:15:52.236 } 00:15:52.236 ] 00:15:52.236 } 00:15:52.236 } 00:15:52.236 }' 00:15:52.236 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:52.495 BaseBdev2' 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.495 [2024-11-26 15:31:50.844785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.495 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:52.496 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.496 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.496 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:52.496 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.496 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.496 "name": "Existed_Raid", 00:15:52.496 "uuid": "e7467e76-c1eb-4426-ab50-e1b1800d8917", 00:15:52.496 "strip_size_kb": 0, 00:15:52.496 "state": "online", 00:15:52.496 "raid_level": "raid1", 00:15:52.496 "superblock": true, 00:15:52.496 "num_base_bdevs": 2, 00:15:52.496 "num_base_bdevs_discovered": 1, 00:15:52.496 "num_base_bdevs_operational": 1, 00:15:52.496 "base_bdevs_list": [ 00:15:52.496 { 00:15:52.496 "name": null, 00:15:52.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.496 "is_configured": false, 00:15:52.496 "data_offset": 0, 00:15:52.496 "data_size": 7936 00:15:52.496 }, 00:15:52.496 { 00:15:52.496 "name": "BaseBdev2", 00:15:52.496 "uuid": "85a70d93-77ec-4131-adc1-50432a1ce0d6", 00:15:52.496 "is_configured": true, 00:15:52.496 "data_offset": 256, 00:15:52.496 "data_size": 7936 00:15:52.496 } 00:15:52.496 ] 00:15:52.496 }' 00:15:52.496 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.496 15:31:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs 
)) 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:53.063 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.064 [2024-11-26 15:31:51.370849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.064 [2024-11-26 15:31:51.370963] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.064 [2024-11-26 15:31:51.393189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.064 [2024-11-26 15:31:51.393245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.064 [2024-11-26 15:31:51.393255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:15:53.064 15:31:51 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 99065 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99065 ']' 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99065 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.064 15:31:51 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99065 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.064 killing process with pid 99065 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99065' 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99065 00:15:53.064 [2024-11-26 15:31:51.490921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.064 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99065 00:15:53.064 [2024-11-26 15:31:51.492486] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.633 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:53.633 00:15:53.633 real 0m4.019s 00:15:53.633 user 0m6.148s 00:15:53.633 sys 0m0.949s 00:15:53.633 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.633 15:31:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.633 ************************************ 00:15:53.633 END TEST raid_state_function_test_sb_md_separate 00:15:53.633 ************************************ 00:15:53.633 15:31:51 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:53.633 15:31:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:53.633 15:31:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.633 15:31:51 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.633 ************************************ 00:15:53.633 START TEST raid_superblock_test_md_separate 00:15:53.633 ************************************ 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:53.633 15:31:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=99301 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 99301 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99301 ']' 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.633 15:31:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.633 [2024-11-26 15:31:52.007833] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:53.633 [2024-11-26 15:31:52.007957] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99301 ] 00:15:53.892 [2024-11-26 15:31:52.147758] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:15:53.892 [2024-11-26 15:31:52.186674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.892 [2024-11-26 15:31:52.225714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.892 [2024-11-26 15:31:52.302468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.892 [2024-11-26 15:31:52.302505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.461 15:31:52 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.461 malloc1 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.461 [2024-11-26 15:31:52.843144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.461 [2024-11-26 15:31:52.843216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.461 [2024-11-26 15:31:52.843239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:54.461 [2024-11-26 15:31:52.843248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.461 [2024-11-26 15:31:52.845488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.461 [2024-11-26 15:31:52.845521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.461 pt1 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:54.461 15:31:52 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.461 malloc2 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.461 [2024-11-26 15:31:52.878924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.461 [2024-11-26 15:31:52.878973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.461 [2024-11-26 15:31:52.878991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:54.461 [2024-11-26 15:31:52.879001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.461 [2024-11-26 15:31:52.881174] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.461 [2024-11-26 15:31:52.881214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.461 pt2 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.461 [2024-11-26 15:31:52.890948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:54.461 [2024-11-26 15:31:52.893114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.461 [2024-11-26 15:31:52.893277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:54.461 [2024-11-26 15:31:52.893292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:54.461 [2024-11-26 15:31:52.893374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:54.461 [2024-11-26 15:31:52.893481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:54.461 [2024-11-26 15:31:52.893519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:54.461 [2024-11-26 15:31:52.893603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.461 15:31:52 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.461 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.462 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.462 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.462 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.462 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.462 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.462 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.462 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.462 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.721 15:31:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.721 "name": "raid_bdev1", 00:15:54.721 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:54.721 "strip_size_kb": 0, 00:15:54.721 "state": "online", 00:15:54.721 "raid_level": "raid1", 00:15:54.721 "superblock": true, 00:15:54.721 "num_base_bdevs": 2, 00:15:54.721 "num_base_bdevs_discovered": 2, 00:15:54.721 "num_base_bdevs_operational": 2, 00:15:54.721 "base_bdevs_list": [ 00:15:54.721 { 00:15:54.721 "name": "pt1", 00:15:54.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.721 "is_configured": true, 00:15:54.721 "data_offset": 256, 00:15:54.721 "data_size": 7936 00:15:54.721 }, 00:15:54.721 { 00:15:54.721 "name": "pt2", 00:15:54.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.721 "is_configured": true, 00:15:54.721 "data_offset": 256, 00:15:54.721 "data_size": 7936 00:15:54.721 } 00:15:54.721 ] 00:15:54.721 }' 00:15:54.721 15:31:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.721 15:31:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.981 
15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 [2024-11-26 15:31:53.395393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.981 "name": "raid_bdev1", 00:15:54.981 "aliases": [ 00:15:54.981 "54e3ac35-b9cf-4881-94f9-02b21aa66791" 00:15:54.981 ], 00:15:54.981 "product_name": "Raid Volume", 00:15:54.981 "block_size": 4096, 00:15:54.981 "num_blocks": 7936, 00:15:54.981 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:54.981 "md_size": 32, 00:15:54.981 "md_interleave": false, 00:15:54.981 "dif_type": 0, 00:15:54.981 "assigned_rate_limits": { 00:15:54.981 "rw_ios_per_sec": 0, 00:15:54.981 "rw_mbytes_per_sec": 0, 00:15:54.981 "r_mbytes_per_sec": 0, 00:15:54.981 "w_mbytes_per_sec": 0 00:15:54.981 }, 00:15:54.981 "claimed": false, 00:15:54.981 "zoned": false, 00:15:54.981 "supported_io_types": { 00:15:54.981 "read": true, 00:15:54.981 "write": true, 00:15:54.981 "unmap": false, 00:15:54.981 "flush": false, 00:15:54.981 "reset": true, 00:15:54.981 "nvme_admin": false, 00:15:54.981 "nvme_io": false, 00:15:54.981 "nvme_io_md": false, 00:15:54.981 "write_zeroes": true, 00:15:54.981 "zcopy": false, 00:15:54.981 "get_zone_info": false, 00:15:54.981 "zone_management": false, 00:15:54.981 "zone_append": false, 00:15:54.981 "compare": false, 00:15:54.981 "compare_and_write": false, 00:15:54.981 "abort": false, 00:15:54.981 "seek_hole": false, 00:15:54.981 "seek_data": false, 00:15:54.981 "copy": false, 00:15:54.981 "nvme_iov_md": false 
00:15:54.981 }, 00:15:54.981 "memory_domains": [ 00:15:54.981 { 00:15:54.981 "dma_device_id": "system", 00:15:54.981 "dma_device_type": 1 00:15:54.981 }, 00:15:54.981 { 00:15:54.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.981 "dma_device_type": 2 00:15:54.981 }, 00:15:54.981 { 00:15:54.981 "dma_device_id": "system", 00:15:54.981 "dma_device_type": 1 00:15:54.981 }, 00:15:54.981 { 00:15:54.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.981 "dma_device_type": 2 00:15:54.981 } 00:15:54.981 ], 00:15:54.981 "driver_specific": { 00:15:54.981 "raid": { 00:15:54.981 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:54.981 "strip_size_kb": 0, 00:15:54.981 "state": "online", 00:15:54.981 "raid_level": "raid1", 00:15:54.981 "superblock": true, 00:15:54.981 "num_base_bdevs": 2, 00:15:54.981 "num_base_bdevs_discovered": 2, 00:15:54.981 "num_base_bdevs_operational": 2, 00:15:54.981 "base_bdevs_list": [ 00:15:54.981 { 00:15:54.981 "name": "pt1", 00:15:54.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.981 "is_configured": true, 00:15:54.981 "data_offset": 256, 00:15:54.981 "data_size": 7936 00:15:54.981 }, 00:15:54.981 { 00:15:54.981 "name": "pt2", 00:15:54.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.981 "is_configured": true, 00:15:54.981 "data_offset": 256, 00:15:54.981 "data_size": 7936 00:15:54.981 } 00:15:54.981 ] 00:15:54.981 } 00:15:54.981 } 00:15:54.981 }' 00:15:54.981 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:55.241 pt2' 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- 
# cmp_raid_bdev='4096 32 false 0' 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.241 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:55.242 15:31:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.242 [2024-11-26 15:31:53.607338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=54e3ac35-b9cf-4881-94f9-02b21aa66791 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 54e3ac35-b9cf-4881-94f9-02b21aa66791 ']' 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.242 [2024-11-26 15:31:53.651111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.242 [2024-11-26 15:31:53.651134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.242 [2024-11-26 15:31:53.651240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.242 [2024-11-26 15:31:53.651296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:15:55.242 [2024-11-26 15:31:53.651310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.242 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.502 [2024-11-26 15:31:53.791163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:55.502 [2024-11-26 15:31:53.793307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:55.502 [2024-11-26 15:31:53.793370] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:55.502 [2024-11-26 15:31:53.793408] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:55.502 [2024-11-26 15:31:53.793421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.502 [2024-11-26 15:31:53.793431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:15:55.502 request: 00:15:55.502 { 00:15:55.502 "name": "raid_bdev1", 00:15:55.502 "raid_level": "raid1", 00:15:55.502 "base_bdevs": [ 00:15:55.502 "malloc1", 00:15:55.502 "malloc2" 00:15:55.502 ], 00:15:55.502 "superblock": false, 00:15:55.502 "method": "bdev_raid_create", 00:15:55.502 "req_id": 1 00:15:55.502 } 00:15:55.502 Got JSON-RPC error response 00:15:55.502 response: 00:15:55.502 { 00:15:55.502 "code": -17, 00:15:55.502 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:55.502 } 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 
00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:55.502 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.503 [2024-11-26 15:31:53.859149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.503 [2024-11-26 15:31:53.859198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.503 [2024-11-26 15:31:53.859212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:55.503 [2024-11-26 15:31:53.859225] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.503 [2024-11-26 15:31:53.861360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.503 [2024-11-26 15:31:53.861391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.503 [2024-11-26 15:31:53.861428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:55.503 [2024-11-26 15:31:53.861458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.503 pt1 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.503 "name": "raid_bdev1", 00:15:55.503 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:55.503 "strip_size_kb": 0, 00:15:55.503 "state": "configuring", 00:15:55.503 "raid_level": "raid1", 00:15:55.503 "superblock": true, 00:15:55.503 "num_base_bdevs": 2, 00:15:55.503 "num_base_bdevs_discovered": 1, 00:15:55.503 "num_base_bdevs_operational": 2, 00:15:55.503 "base_bdevs_list": [ 00:15:55.503 { 00:15:55.503 "name": "pt1", 00:15:55.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.503 "is_configured": true, 00:15:55.503 "data_offset": 256, 00:15:55.503 "data_size": 7936 00:15:55.503 }, 00:15:55.503 { 00:15:55.503 "name": null, 00:15:55.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.503 "is_configured": false, 00:15:55.503 "data_offset": 256, 00:15:55.503 "data_size": 7936 00:15:55.503 } 00:15:55.503 ] 00:15:55.503 }' 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.503 15:31:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.073 [2024-11-26 15:31:54.331281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.073 [2024-11-26 15:31:54.331329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.073 [2024-11-26 15:31:54.331347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:56.073 [2024-11-26 15:31:54.331357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.073 [2024-11-26 15:31:54.331496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.073 [2024-11-26 15:31:54.331511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.073 [2024-11-26 15:31:54.331545] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:56.073 [2024-11-26 15:31:54.331561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.073 [2024-11-26 15:31:54.331627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:56.073 [2024-11-26 15:31:54.331637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:56.073 [2024-11-26 15:31:54.331696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:56.073 [2024-11-26 15:31:54.331780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:56.073 [2024-11-26 15:31:54.331787] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:56.073 [2024-11-26 15:31:54.331844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.073 pt2 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.073 15:31:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.073 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.074 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.074 "name": "raid_bdev1", 00:15:56.074 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:56.074 "strip_size_kb": 0, 00:15:56.074 "state": "online", 00:15:56.074 "raid_level": "raid1", 00:15:56.074 "superblock": true, 00:15:56.074 "num_base_bdevs": 2, 00:15:56.074 "num_base_bdevs_discovered": 2, 00:15:56.074 "num_base_bdevs_operational": 2, 00:15:56.074 "base_bdevs_list": [ 00:15:56.074 { 00:15:56.074 "name": "pt1", 00:15:56.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.074 "is_configured": true, 00:15:56.074 "data_offset": 256, 00:15:56.074 "data_size": 7936 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "name": "pt2", 00:15:56.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.074 "is_configured": true, 00:15:56.074 "data_offset": 256, 00:15:56.074 "data_size": 7936 00:15:56.074 } 00:15:56.074 ] 00:15:56.074 }' 00:15:56.074 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.074 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.334 [2024-11-26 15:31:54.731628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.334 "name": "raid_bdev1", 00:15:56.334 "aliases": [ 00:15:56.334 "54e3ac35-b9cf-4881-94f9-02b21aa66791" 00:15:56.334 ], 00:15:56.334 "product_name": "Raid Volume", 00:15:56.334 "block_size": 4096, 00:15:56.334 "num_blocks": 7936, 00:15:56.334 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:56.334 "md_size": 32, 00:15:56.334 "md_interleave": false, 00:15:56.334 "dif_type": 0, 00:15:56.334 "assigned_rate_limits": { 00:15:56.334 "rw_ios_per_sec": 0, 00:15:56.334 "rw_mbytes_per_sec": 0, 00:15:56.334 "r_mbytes_per_sec": 0, 00:15:56.334 "w_mbytes_per_sec": 0 00:15:56.334 }, 00:15:56.334 "claimed": false, 00:15:56.334 "zoned": false, 00:15:56.334 "supported_io_types": { 00:15:56.334 "read": true, 00:15:56.334 "write": true, 00:15:56.334 "unmap": false, 00:15:56.334 
"flush": false, 00:15:56.334 "reset": true, 00:15:56.334 "nvme_admin": false, 00:15:56.334 "nvme_io": false, 00:15:56.334 "nvme_io_md": false, 00:15:56.334 "write_zeroes": true, 00:15:56.334 "zcopy": false, 00:15:56.334 "get_zone_info": false, 00:15:56.334 "zone_management": false, 00:15:56.334 "zone_append": false, 00:15:56.334 "compare": false, 00:15:56.334 "compare_and_write": false, 00:15:56.334 "abort": false, 00:15:56.334 "seek_hole": false, 00:15:56.334 "seek_data": false, 00:15:56.334 "copy": false, 00:15:56.334 "nvme_iov_md": false 00:15:56.334 }, 00:15:56.334 "memory_domains": [ 00:15:56.334 { 00:15:56.334 "dma_device_id": "system", 00:15:56.334 "dma_device_type": 1 00:15:56.334 }, 00:15:56.334 { 00:15:56.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.334 "dma_device_type": 2 00:15:56.334 }, 00:15:56.334 { 00:15:56.334 "dma_device_id": "system", 00:15:56.334 "dma_device_type": 1 00:15:56.334 }, 00:15:56.334 { 00:15:56.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.334 "dma_device_type": 2 00:15:56.334 } 00:15:56.334 ], 00:15:56.334 "driver_specific": { 00:15:56.334 "raid": { 00:15:56.334 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:56.334 "strip_size_kb": 0, 00:15:56.334 "state": "online", 00:15:56.334 "raid_level": "raid1", 00:15:56.334 "superblock": true, 00:15:56.334 "num_base_bdevs": 2, 00:15:56.334 "num_base_bdevs_discovered": 2, 00:15:56.334 "num_base_bdevs_operational": 2, 00:15:56.334 "base_bdevs_list": [ 00:15:56.334 { 00:15:56.334 "name": "pt1", 00:15:56.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.334 "is_configured": true, 00:15:56.334 "data_offset": 256, 00:15:56.334 "data_size": 7936 00:15:56.334 }, 00:15:56.334 { 00:15:56.334 "name": "pt2", 00:15:56.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.334 "is_configured": true, 00:15:56.334 "data_offset": 256, 00:15:56.334 "data_size": 7936 00:15:56.334 } 00:15:56.334 ] 00:15:56.334 } 00:15:56.334 } 00:15:56.334 }' 00:15:56.334 15:31:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:56.334 pt2' 00:15:56.334 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.594 [2024-11-26 15:31:54.963688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 54e3ac35-b9cf-4881-94f9-02b21aa66791 '!=' 54e3ac35-b9cf-4881-94f9-02b21aa66791 ']' 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd 
bdev_passthru_delete pt1 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.594 [2024-11-26 15:31:54.991472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.594 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.595 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.595 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.595 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.595 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.595 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.595 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.595 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.595 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.595 15:31:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.595 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.595 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.595 15:31:55 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.595 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.595 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.595 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.595 "name": "raid_bdev1", 00:15:56.595 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:56.595 "strip_size_kb": 0, 00:15:56.595 "state": "online", 00:15:56.595 "raid_level": "raid1", 00:15:56.595 "superblock": true, 00:15:56.595 "num_base_bdevs": 2, 00:15:56.595 "num_base_bdevs_discovered": 1, 00:15:56.595 "num_base_bdevs_operational": 1, 00:15:56.595 "base_bdevs_list": [ 00:15:56.595 { 00:15:56.595 "name": null, 00:15:56.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.595 "is_configured": false, 00:15:56.595 "data_offset": 0, 00:15:56.595 "data_size": 7936 00:15:56.595 }, 00:15:56.595 { 00:15:56.595 "name": "pt2", 00:15:56.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.595 "is_configured": true, 00:15:56.595 "data_offset": 256, 00:15:56.595 "data_size": 7936 00:15:56.595 } 00:15:56.595 ] 00:15:56.595 }' 00:15:56.595 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.595 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.164 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.164 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.164 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.164 [2024-11-26 15:31:55.411573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:15:57.164 [2024-11-26 15:31:55.411605] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.164 [2024-11-26 15:31:55.411661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.164 [2024-11-26 15:31:55.411696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.165 [2024-11-26 15:31:55.411707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.165 15:31:55 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.165 [2024-11-26 15:31:55.483592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.165 [2024-11-26 15:31:55.483634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.165 [2024-11-26 15:31:55.483647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:57.165 [2024-11-26 15:31:55.483659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.165 [2024-11-26 15:31:55.485944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.165 [2024-11-26 15:31:55.485980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.165 [2024-11-26 15:31:55.486018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:15:57.165 [2024-11-26 15:31:55.486047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.165 [2024-11-26 15:31:55.486098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:57.165 [2024-11-26 15:31:55.486118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:57.165 [2024-11-26 15:31:55.486216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:57.165 [2024-11-26 15:31:55.486299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:57.165 [2024-11-26 15:31:55.486307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:57.165 [2024-11-26 15:31:55.486366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.165 pt2 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.165 "name": "raid_bdev1", 00:15:57.165 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:57.165 "strip_size_kb": 0, 00:15:57.165 "state": "online", 00:15:57.165 "raid_level": "raid1", 00:15:57.165 "superblock": true, 00:15:57.165 "num_base_bdevs": 2, 00:15:57.165 "num_base_bdevs_discovered": 1, 00:15:57.165 "num_base_bdevs_operational": 1, 00:15:57.165 "base_bdevs_list": [ 00:15:57.165 { 00:15:57.165 "name": null, 00:15:57.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.165 "is_configured": false, 00:15:57.165 "data_offset": 256, 00:15:57.165 "data_size": 7936 00:15:57.165 }, 00:15:57.165 { 00:15:57.165 "name": "pt2", 00:15:57.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.165 "is_configured": true, 00:15:57.165 "data_offset": 256, 00:15:57.165 "data_size": 7936 00:15:57.165 } 00:15:57.165 ] 00:15:57.165 }' 00:15:57.165 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.165 15:31:55 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.735 [2024-11-26 15:31:55.939700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.735 [2024-11-26 15:31:55.939724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.735 [2024-11-26 15:31:55.939773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.735 [2024-11-26 15:31:55.939808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.735 [2024-11-26 15:31:55.939815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.735 15:31:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.735 [2024-11-26 15:31:56.003734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:57.735 [2024-11-26 15:31:56.003773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.735 [2024-11-26 15:31:56.003791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:57.735 [2024-11-26 15:31:56.003800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.735 [2024-11-26 15:31:56.005984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.735 [2024-11-26 15:31:56.006012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:57.735 [2024-11-26 15:31:56.006053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:57.735 [2024-11-26 15:31:56.006076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.735 [2024-11-26 15:31:56.006153] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:57.735 [2024-11-26 15:31:56.006173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.735 [2024-11-26 15:31:56.006201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:15:57.735 
[2024-11-26 15:31:56.006244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.735 [2024-11-26 15:31:56.006302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:57.735 [2024-11-26 15:31:56.006309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:57.735 [2024-11-26 15:31:56.006387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:57.735 [2024-11-26 15:31:56.006463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:57.735 [2024-11-26 15:31:56.006476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:57.736 [2024-11-26 15:31:56.006543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.736 pt1 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.736 "name": "raid_bdev1", 00:15:57.736 "uuid": "54e3ac35-b9cf-4881-94f9-02b21aa66791", 00:15:57.736 "strip_size_kb": 0, 00:15:57.736 "state": "online", 00:15:57.736 "raid_level": "raid1", 00:15:57.736 "superblock": true, 00:15:57.736 "num_base_bdevs": 2, 00:15:57.736 "num_base_bdevs_discovered": 1, 00:15:57.736 "num_base_bdevs_operational": 1, 00:15:57.736 "base_bdevs_list": [ 00:15:57.736 { 00:15:57.736 "name": null, 00:15:57.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.736 "is_configured": false, 00:15:57.736 "data_offset": 256, 00:15:57.736 "data_size": 7936 00:15:57.736 }, 00:15:57.736 { 00:15:57.736 "name": "pt2", 00:15:57.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.736 "is_configured": true, 00:15:57.736 "data_offset": 256, 00:15:57.736 "data_size": 7936 00:15:57.736 } 00:15:57.736 ] 00:15:57.736 }' 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.736 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.995 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:57.995 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:57.995 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.996 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.996 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.256 [2024-11-26 15:31:56.484060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 54e3ac35-b9cf-4881-94f9-02b21aa66791 '!=' 54e3ac35-b9cf-4881-94f9-02b21aa66791 ']' 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 99301 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' 
-z 99301 ']' 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 99301 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99301 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.256 killing process with pid 99301 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99301' 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 99301 00:15:58.256 [2024-11-26 15:31:56.555484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.256 [2024-11-26 15:31:56.555557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.256 [2024-11-26 15:31:56.555590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.256 [2024-11-26 15:31:56.555601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:58.256 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 99301 00:15:58.256 [2024-11-26 15:31:56.600731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.516 15:31:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:58.516 00:15:58.516 real 0m5.024s 00:15:58.516 user 0m7.990s 00:15:58.516 sys 0m1.220s 
00:15:58.516 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.516 15:31:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.516 ************************************ 00:15:58.516 END TEST raid_superblock_test_md_separate 00:15:58.516 ************************************ 00:15:58.516 15:31:56 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:58.516 15:31:56 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:58.777 15:31:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:58.777 15:31:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.777 15:31:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.777 ************************************ 00:15:58.777 START TEST raid_rebuild_test_sb_md_separate 00:15:58.777 ************************************ 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.777 
15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:58.777 15:31:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=99618 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 99618 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99618 ']' 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.777 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.777 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:58.777 Zero copy mechanism will not be used. 00:15:58.777 [2024-11-26 15:31:57.115926] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:15:58.777 [2024-11-26 15:31:57.116061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99618 ] 00:15:59.142 [2024-11-26 15:31:57.257304] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:59.142 [2024-11-26 15:31:57.294238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.142 [2024-11-26 15:31:57.335377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.142 [2024-11-26 15:31:57.412021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.142 [2024-11-26 15:31:57.412063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.739 BaseBdev1_malloc 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:59.739 
15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.739 [2024-11-26 15:31:57.953201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:59.739 [2024-11-26 15:31:57.953274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.739 [2024-11-26 15:31:57.953299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:59.739 [2024-11-26 15:31:57.953315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.739 [2024-11-26 15:31:57.955586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.739 [2024-11-26 15:31:57.955626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.739 BaseBdev1 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.739 BaseBdev2_malloc 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.739 [2024-11-26 15:31:57.988938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:59.739 [2024-11-26 15:31:57.988990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.739 [2024-11-26 15:31:57.989009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.739 [2024-11-26 15:31:57.989020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.739 [2024-11-26 15:31:57.991118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.739 [2024-11-26 15:31:57.991151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.739 BaseBdev2 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.739 15:31:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.739 spare_malloc 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.739 spare_delay 
00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.739 [2024-11-26 15:31:58.057308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.739 [2024-11-26 15:31:58.057394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.739 [2024-11-26 15:31:58.057427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:59.739 [2024-11-26 15:31:58.057447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.739 [2024-11-26 15:31:58.060928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.739 [2024-11-26 15:31:58.060978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.739 spare 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.739 [2024-11-26 15:31:58.069349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.739 [2024-11-26 15:31:58.071860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:15:59.739 [2024-11-26 15:31:58.072048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:59.739 [2024-11-26 15:31:58.072067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:59.739 [2024-11-26 15:31:58.072154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:59.739 [2024-11-26 15:31:58.072321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:59.739 [2024-11-26 15:31:58.072344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:59.739 [2024-11-26 15:31:58.072463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.739 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.739 "name": "raid_bdev1", 00:15:59.739 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:15:59.739 "strip_size_kb": 0, 00:15:59.739 "state": "online", 00:15:59.739 "raid_level": "raid1", 00:15:59.739 "superblock": true, 00:15:59.739 "num_base_bdevs": 2, 00:15:59.739 "num_base_bdevs_discovered": 2, 00:15:59.739 "num_base_bdevs_operational": 2, 00:15:59.739 "base_bdevs_list": [ 00:15:59.739 { 00:15:59.739 "name": "BaseBdev1", 00:15:59.740 "uuid": "4cf60d6f-611b-539d-a455-d9f93f9fd3ce", 00:15:59.740 "is_configured": true, 00:15:59.740 "data_offset": 256, 00:15:59.740 "data_size": 7936 00:15:59.740 }, 00:15:59.740 { 00:15:59.740 "name": "BaseBdev2", 00:15:59.740 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:15:59.740 "is_configured": true, 00:15:59.740 "data_offset": 256, 00:15:59.740 "data_size": 7936 00:15:59.740 } 00:15:59.740 ] 00:15:59.740 }' 00:15:59.740 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.740 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.324 15:31:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.324 [2024-11-26 15:31:58.521691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:00.324 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:00.324 [2024-11-26 15:31:58.781511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:00.324 /dev/nbd0 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.583 
15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.583 1+0 records in 00:16:00.583 1+0 records out 00:16:00.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318423 s, 12.9 MB/s 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:00.583 15:31:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:01.162 7936+0 records in 00:16:01.162 7936+0 records out 00:16:01.162 32505856 bytes (33 MB, 31 MiB) copied, 0.584464 s, 55.6 MB/s 00:16:01.162 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:01.162 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.162 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:01.162 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:01.162 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:01.162 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.162 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:01.422 [2024-11-26 15:31:59.637871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.422 [2024-11-26 15:31:59.673937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.422 "name": "raid_bdev1", 00:16:01.422 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:01.422 "strip_size_kb": 0, 00:16:01.422 "state": "online", 00:16:01.422 "raid_level": "raid1", 00:16:01.422 "superblock": true, 00:16:01.422 "num_base_bdevs": 2, 00:16:01.422 "num_base_bdevs_discovered": 1, 00:16:01.422 "num_base_bdevs_operational": 1, 00:16:01.422 "base_bdevs_list": [ 00:16:01.422 { 00:16:01.422 "name": null, 00:16:01.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.422 "is_configured": false, 00:16:01.422 "data_offset": 0, 00:16:01.422 "data_size": 7936 00:16:01.422 }, 00:16:01.422 { 00:16:01.422 "name": "BaseBdev2", 00:16:01.422 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:01.422 "is_configured": true, 00:16:01.422 "data_offset": 256, 00:16:01.422 "data_size": 7936 00:16:01.422 } 00:16:01.422 ] 00:16:01.422 }' 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.422 15:31:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.682 15:32:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.682 15:32:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:01.682 15:32:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.682 [2024-11-26 15:32:00.146055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.682 [2024-11-26 15:32:00.150574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:16:01.682 15:32:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.682 15:32:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:01.682 [2024-11-26 15:32:00.152868] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.063 15:32:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.063 "name": "raid_bdev1", 00:16:03.063 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:03.063 "strip_size_kb": 0, 00:16:03.063 "state": "online", 00:16:03.063 "raid_level": "raid1", 00:16:03.063 "superblock": true, 00:16:03.063 "num_base_bdevs": 2, 00:16:03.063 "num_base_bdevs_discovered": 2, 00:16:03.063 "num_base_bdevs_operational": 2, 00:16:03.063 "process": { 00:16:03.063 "type": "rebuild", 00:16:03.063 "target": "spare", 00:16:03.063 "progress": { 00:16:03.063 "blocks": 2560, 00:16:03.063 "percent": 32 00:16:03.063 } 00:16:03.063 }, 00:16:03.063 "base_bdevs_list": [ 00:16:03.063 { 00:16:03.063 "name": "spare", 00:16:03.063 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:03.063 "is_configured": true, 00:16:03.063 "data_offset": 256, 00:16:03.063 "data_size": 7936 00:16:03.063 }, 00:16:03.063 { 00:16:03.063 "name": "BaseBdev2", 00:16:03.063 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:03.063 "is_configured": true, 00:16:03.063 "data_offset": 256, 00:16:03.063 "data_size": 7936 00:16:03.063 } 00:16:03.063 ] 00:16:03.063 }' 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.063 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.064 [2024-11-26 15:32:01.296245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.064 [2024-11-26 15:32:01.363423] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:03.064 [2024-11-26 15:32:01.363487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.064 [2024-11-26 15:32:01.363501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.064 [2024-11-26 15:32:01.363521] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.064 "name": "raid_bdev1", 00:16:03.064 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:03.064 "strip_size_kb": 0, 00:16:03.064 "state": "online", 00:16:03.064 "raid_level": "raid1", 00:16:03.064 "superblock": true, 00:16:03.064 "num_base_bdevs": 2, 00:16:03.064 "num_base_bdevs_discovered": 1, 00:16:03.064 "num_base_bdevs_operational": 1, 00:16:03.064 "base_bdevs_list": [ 00:16:03.064 { 00:16:03.064 "name": null, 00:16:03.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.064 "is_configured": false, 00:16:03.064 "data_offset": 0, 00:16:03.064 "data_size": 7936 00:16:03.064 }, 00:16:03.064 { 00:16:03.064 "name": "BaseBdev2", 00:16:03.064 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:03.064 "is_configured": true, 00:16:03.064 "data_offset": 256, 00:16:03.064 "data_size": 7936 00:16:03.064 } 00:16:03.064 ] 00:16:03.064 }' 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.064 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.631 15:32:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.631 "name": "raid_bdev1", 00:16:03.631 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:03.631 "strip_size_kb": 0, 00:16:03.631 "state": "online", 00:16:03.631 "raid_level": "raid1", 00:16:03.631 "superblock": true, 00:16:03.631 "num_base_bdevs": 2, 00:16:03.631 "num_base_bdevs_discovered": 1, 00:16:03.631 "num_base_bdevs_operational": 1, 00:16:03.631 "base_bdevs_list": [ 00:16:03.631 { 00:16:03.631 "name": null, 00:16:03.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.631 "is_configured": false, 00:16:03.631 "data_offset": 0, 00:16:03.631 "data_size": 7936 00:16:03.631 }, 00:16:03.631 { 00:16:03.631 "name": "BaseBdev2", 00:16:03.631 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:03.631 "is_configured": true, 00:16:03.631 "data_offset": 256, 00:16:03.631 "data_size": 7936 
00:16:03.631 } 00:16:03.631 ] 00:16:03.631 }' 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.631 [2024-11-26 15:32:01.988977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.631 [2024-11-26 15:32:01.992237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:16:03.631 [2024-11-26 15:32:01.994396] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.631 15:32:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:04.573 15:32:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.573 15:32:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.573 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.573 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:16:04.573 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.573 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.573 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.573 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.573 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.573 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.834 "name": "raid_bdev1", 00:16:04.834 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:04.834 "strip_size_kb": 0, 00:16:04.834 "state": "online", 00:16:04.834 "raid_level": "raid1", 00:16:04.834 "superblock": true, 00:16:04.834 "num_base_bdevs": 2, 00:16:04.834 "num_base_bdevs_discovered": 2, 00:16:04.834 "num_base_bdevs_operational": 2, 00:16:04.834 "process": { 00:16:04.834 "type": "rebuild", 00:16:04.834 "target": "spare", 00:16:04.834 "progress": { 00:16:04.834 "blocks": 2560, 00:16:04.834 "percent": 32 00:16:04.834 } 00:16:04.834 }, 00:16:04.834 "base_bdevs_list": [ 00:16:04.834 { 00:16:04.834 "name": "spare", 00:16:04.834 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:04.834 "is_configured": true, 00:16:04.834 "data_offset": 256, 00:16:04.834 "data_size": 7936 00:16:04.834 }, 00:16:04.834 { 00:16:04.834 "name": "BaseBdev2", 00:16:04.834 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:04.834 "is_configured": true, 00:16:04.834 "data_offset": 256, 00:16:04.834 "data_size": 7936 00:16:04.834 } 00:16:04.834 ] 00:16:04.834 }' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:04.834 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=582 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.834 
15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.834 "name": "raid_bdev1", 00:16:04.834 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:04.834 "strip_size_kb": 0, 00:16:04.834 "state": "online", 00:16:04.834 "raid_level": "raid1", 00:16:04.834 "superblock": true, 00:16:04.834 "num_base_bdevs": 2, 00:16:04.834 "num_base_bdevs_discovered": 2, 00:16:04.834 "num_base_bdevs_operational": 2, 00:16:04.834 "process": { 00:16:04.834 "type": "rebuild", 00:16:04.834 "target": "spare", 00:16:04.834 "progress": { 00:16:04.834 "blocks": 2816, 00:16:04.834 "percent": 35 00:16:04.834 } 00:16:04.834 }, 00:16:04.834 "base_bdevs_list": [ 00:16:04.834 { 00:16:04.834 "name": "spare", 00:16:04.834 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:04.834 "is_configured": true, 00:16:04.834 "data_offset": 256, 00:16:04.834 "data_size": 7936 00:16:04.834 }, 00:16:04.834 { 00:16:04.834 "name": "BaseBdev2", 00:16:04.834 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:04.834 "is_configured": true, 00:16:04.834 "data_offset": 256, 00:16:04.834 "data_size": 7936 00:16:04.834 } 00:16:04.834 ] 00:16:04.834 }' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.834 15:32:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.216 "name": "raid_bdev1", 00:16:06.216 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:06.216 "strip_size_kb": 0, 00:16:06.216 
"state": "online", 00:16:06.216 "raid_level": "raid1", 00:16:06.216 "superblock": true, 00:16:06.216 "num_base_bdevs": 2, 00:16:06.216 "num_base_bdevs_discovered": 2, 00:16:06.216 "num_base_bdevs_operational": 2, 00:16:06.216 "process": { 00:16:06.216 "type": "rebuild", 00:16:06.216 "target": "spare", 00:16:06.216 "progress": { 00:16:06.216 "blocks": 5632, 00:16:06.216 "percent": 70 00:16:06.216 } 00:16:06.216 }, 00:16:06.216 "base_bdevs_list": [ 00:16:06.216 { 00:16:06.216 "name": "spare", 00:16:06.216 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:06.216 "is_configured": true, 00:16:06.216 "data_offset": 256, 00:16:06.216 "data_size": 7936 00:16:06.216 }, 00:16:06.216 { 00:16:06.216 "name": "BaseBdev2", 00:16:06.216 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:06.216 "is_configured": true, 00:16:06.216 "data_offset": 256, 00:16:06.216 "data_size": 7936 00:16:06.216 } 00:16:06.216 ] 00:16:06.216 }' 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.216 15:32:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.786 [2024-11-26 15:32:05.119606] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:06.786 [2024-11-26 15:32:05.119682] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:06.786 [2024-11-26 15:32:05.119784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.046 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.046 "name": "raid_bdev1", 00:16:07.046 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:07.046 "strip_size_kb": 0, 00:16:07.046 "state": "online", 00:16:07.046 "raid_level": "raid1", 00:16:07.046 "superblock": true, 00:16:07.046 "num_base_bdevs": 2, 00:16:07.046 "num_base_bdevs_discovered": 2, 00:16:07.046 "num_base_bdevs_operational": 2, 00:16:07.046 "base_bdevs_list": [ 00:16:07.046 { 00:16:07.046 "name": "spare", 00:16:07.046 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:07.046 "is_configured": true, 00:16:07.046 "data_offset": 256, 00:16:07.046 "data_size": 7936 
00:16:07.046 }, 00:16:07.046 { 00:16:07.046 "name": "BaseBdev2", 00:16:07.046 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:07.046 "is_configured": true, 00:16:07.046 "data_offset": 256, 00:16:07.046 "data_size": 7936 00:16:07.046 } 00:16:07.046 ] 00:16:07.047 }' 00:16:07.047 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 
15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.307 "name": "raid_bdev1", 00:16:07.307 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:07.307 "strip_size_kb": 0, 00:16:07.307 "state": "online", 00:16:07.307 "raid_level": "raid1", 00:16:07.307 "superblock": true, 00:16:07.307 "num_base_bdevs": 2, 00:16:07.307 "num_base_bdevs_discovered": 2, 00:16:07.307 "num_base_bdevs_operational": 2, 00:16:07.307 "base_bdevs_list": [ 00:16:07.307 { 00:16:07.307 "name": "spare", 00:16:07.307 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:07.307 "is_configured": true, 00:16:07.307 "data_offset": 256, 00:16:07.307 "data_size": 7936 00:16:07.307 }, 00:16:07.307 { 00:16:07.307 "name": "BaseBdev2", 00:16:07.307 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:07.307 "is_configured": true, 00:16:07.307 "data_offset": 256, 00:16:07.307 "data_size": 7936 00:16:07.307 } 00:16:07.307 ] 00:16:07.307 }' 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.307 15:32:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.307 "name": "raid_bdev1", 00:16:07.307 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:07.307 "strip_size_kb": 0, 00:16:07.307 "state": "online", 00:16:07.307 "raid_level": "raid1", 00:16:07.307 "superblock": true, 00:16:07.307 "num_base_bdevs": 2, 00:16:07.307 "num_base_bdevs_discovered": 2, 00:16:07.307 "num_base_bdevs_operational": 2, 00:16:07.307 "base_bdevs_list": [ 00:16:07.307 { 00:16:07.307 "name": "spare", 00:16:07.307 "uuid": 
"4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:07.307 "is_configured": true, 00:16:07.307 "data_offset": 256, 00:16:07.307 "data_size": 7936 00:16:07.307 }, 00:16:07.307 { 00:16:07.307 "name": "BaseBdev2", 00:16:07.307 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:07.307 "is_configured": true, 00:16:07.307 "data_offset": 256, 00:16:07.307 "data_size": 7936 00:16:07.307 } 00:16:07.307 ] 00:16:07.307 }' 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.307 15:32:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.876 [2024-11-26 15:32:06.168099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.876 [2024-11-26 15:32:06.168133] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.876 [2024-11-26 15:32:06.168242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.876 [2024-11-26 15:32:06.168326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.876 [2024-11-26 15:32:06.168337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:07.876 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 
00:16:08.134 /dev/nbd0 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.134 1+0 records in 00:16:08.134 1+0 records out 00:16:08.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499933 s, 8.2 MB/s 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.134 15:32:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:08.134 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:08.393 /dev/nbd1 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:08.393 1+0 records in 00:16:08.393 1+0 records out 00:16:08.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429181 s, 9.5 MB/s 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.393 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.652 15:32:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:08.910 
15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.910 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.911 [2024-11-26 15:32:07.211206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:08.911 [2024-11-26 15:32:07.211258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.911 [2024-11-26 15:32:07.211285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:08.911 [2024-11-26 15:32:07.211294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.911 [2024-11-26 15:32:07.213543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.911 [2024-11-26 15:32:07.213580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:08.911 [2024-11-26 15:32:07.213636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:16:08.911 [2024-11-26 15:32:07.213686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.911 [2024-11-26 15:32:07.213793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.911 spare 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.911 [2024-11-26 15:32:07.313866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:08.911 [2024-11-26 15:32:07.313896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:08.911 [2024-11-26 15:32:07.314001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:16:08.911 [2024-11-26 15:32:07.314102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:08.911 [2024-11-26 15:32:07.314109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:08.911 [2024-11-26 15:32:07.314243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.911 "name": "raid_bdev1", 00:16:08.911 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:08.911 "strip_size_kb": 0, 00:16:08.911 "state": "online", 00:16:08.911 "raid_level": "raid1", 00:16:08.911 "superblock": true, 00:16:08.911 "num_base_bdevs": 2, 00:16:08.911 "num_base_bdevs_discovered": 2, 00:16:08.911 "num_base_bdevs_operational": 2, 00:16:08.911 "base_bdevs_list": [ 
00:16:08.911 { 00:16:08.911 "name": "spare", 00:16:08.911 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:08.911 "is_configured": true, 00:16:08.911 "data_offset": 256, 00:16:08.911 "data_size": 7936 00:16:08.911 }, 00:16:08.911 { 00:16:08.911 "name": "BaseBdev2", 00:16:08.911 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:08.911 "is_configured": true, 00:16:08.911 "data_offset": 256, 00:16:08.911 "data_size": 7936 00:16:08.911 } 00:16:08.911 ] 00:16:08.911 }' 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.911 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.480 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.480 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.481 "name": "raid_bdev1", 00:16:09.481 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:09.481 "strip_size_kb": 0, 00:16:09.481 "state": "online", 00:16:09.481 "raid_level": "raid1", 00:16:09.481 "superblock": true, 00:16:09.481 "num_base_bdevs": 2, 00:16:09.481 "num_base_bdevs_discovered": 2, 00:16:09.481 "num_base_bdevs_operational": 2, 00:16:09.481 "base_bdevs_list": [ 00:16:09.481 { 00:16:09.481 "name": "spare", 00:16:09.481 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:09.481 "is_configured": true, 00:16:09.481 "data_offset": 256, 00:16:09.481 "data_size": 7936 00:16:09.481 }, 00:16:09.481 { 00:16:09.481 "name": "BaseBdev2", 00:16:09.481 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:09.481 "is_configured": true, 00:16:09.481 "data_offset": 256, 00:16:09.481 "data_size": 7936 00:16:09.481 } 00:16:09.481 ] 00:16:09.481 }' 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.481 [2024-11-26 15:32:07.919401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.481 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.741 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.741 "name": "raid_bdev1", 00:16:09.741 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:09.741 "strip_size_kb": 0, 00:16:09.741 "state": "online", 00:16:09.741 "raid_level": "raid1", 00:16:09.741 "superblock": true, 00:16:09.741 "num_base_bdevs": 2, 00:16:09.741 "num_base_bdevs_discovered": 1, 00:16:09.741 "num_base_bdevs_operational": 1, 00:16:09.741 "base_bdevs_list": [ 00:16:09.741 { 00:16:09.741 "name": null, 00:16:09.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.741 "is_configured": false, 00:16:09.741 "data_offset": 0, 00:16:09.741 "data_size": 7936 00:16:09.741 }, 00:16:09.741 { 00:16:09.741 "name": "BaseBdev2", 00:16:09.741 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:09.741 "is_configured": true, 00:16:09.741 "data_offset": 256, 00:16:09.741 "data_size": 7936 00:16:09.741 } 00:16:09.741 ] 00:16:09.741 }' 00:16:09.741 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.741 15:32:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.001 15:32:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.001 15:32:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:10.001 15:32:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.001 [2024-11-26 15:32:08.359548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.001 [2024-11-26 15:32:08.359667] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:10.001 [2024-11-26 15:32:08.359689] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:10.001 [2024-11-26 15:32:08.359716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.001 [2024-11-26 15:32:08.363942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:16:10.001 15:32:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.001 15:32:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:10.001 [2024-11-26 15:32:08.366108] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.949 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.209 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.209 "name": "raid_bdev1", 00:16:11.209 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:11.209 "strip_size_kb": 0, 00:16:11.209 "state": "online", 00:16:11.209 "raid_level": "raid1", 00:16:11.209 "superblock": true, 00:16:11.209 "num_base_bdevs": 2, 00:16:11.209 "num_base_bdevs_discovered": 2, 00:16:11.209 "num_base_bdevs_operational": 2, 00:16:11.209 "process": { 00:16:11.209 "type": "rebuild", 00:16:11.209 "target": "spare", 00:16:11.209 "progress": { 00:16:11.209 "blocks": 2560, 00:16:11.209 "percent": 32 00:16:11.209 } 00:16:11.209 }, 00:16:11.209 "base_bdevs_list": [ 00:16:11.209 { 00:16:11.209 "name": "spare", 00:16:11.209 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:11.209 "is_configured": true, 00:16:11.209 "data_offset": 256, 00:16:11.209 "data_size": 7936 00:16:11.209 }, 00:16:11.209 { 00:16:11.209 "name": "BaseBdev2", 00:16:11.209 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:11.209 "is_configured": true, 00:16:11.209 "data_offset": 256, 00:16:11.209 "data_size": 7936 00:16:11.209 } 00:16:11.209 ] 00:16:11.209 }' 00:16:11.209 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.209 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.209 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.209 
15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.209 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:11.209 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.209 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.209 [2024-11-26 15:32:09.515867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.210 [2024-11-26 15:32:09.575793] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:11.210 [2024-11-26 15:32:09.575850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.210 [2024-11-26 15:32:09.575865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.210 [2024-11-26 15:32:09.575875] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.210 15:32:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.210 "name": "raid_bdev1", 00:16:11.210 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:11.210 "strip_size_kb": 0, 00:16:11.210 "state": "online", 00:16:11.210 "raid_level": "raid1", 00:16:11.210 "superblock": true, 00:16:11.210 "num_base_bdevs": 2, 00:16:11.210 "num_base_bdevs_discovered": 1, 00:16:11.210 "num_base_bdevs_operational": 1, 00:16:11.210 "base_bdevs_list": [ 00:16:11.210 { 00:16:11.210 "name": null, 00:16:11.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.210 "is_configured": false, 00:16:11.210 "data_offset": 0, 00:16:11.210 "data_size": 7936 00:16:11.210 }, 00:16:11.210 { 00:16:11.210 "name": "BaseBdev2", 00:16:11.210 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:11.210 "is_configured": true, 00:16:11.210 "data_offset": 256, 00:16:11.210 "data_size": 7936 00:16:11.210 } 
00:16:11.210 ] 00:16:11.210 }' 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.210 15:32:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.779 15:32:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:11.779 15:32:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.779 15:32:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.779 [2024-11-26 15:32:10.052572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:11.779 [2024-11-26 15:32:10.052629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.779 [2024-11-26 15:32:10.052656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:11.779 [2024-11-26 15:32:10.052668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.779 [2024-11-26 15:32:10.052900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.779 [2024-11-26 15:32:10.052917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:11.779 [2024-11-26 15:32:10.052967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:11.779 [2024-11-26 15:32:10.052987] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.779 [2024-11-26 15:32:10.052996] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:11.779 [2024-11-26 15:32:10.053018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.779 [2024-11-26 15:32:10.055865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:16:11.779 [2024-11-26 15:32:10.057995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.779 spare 00:16:11.779 15:32:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.779 15:32:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.720 "name": 
"raid_bdev1", 00:16:12.720 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:12.720 "strip_size_kb": 0, 00:16:12.720 "state": "online", 00:16:12.720 "raid_level": "raid1", 00:16:12.720 "superblock": true, 00:16:12.720 "num_base_bdevs": 2, 00:16:12.720 "num_base_bdevs_discovered": 2, 00:16:12.720 "num_base_bdevs_operational": 2, 00:16:12.720 "process": { 00:16:12.720 "type": "rebuild", 00:16:12.720 "target": "spare", 00:16:12.720 "progress": { 00:16:12.720 "blocks": 2560, 00:16:12.720 "percent": 32 00:16:12.720 } 00:16:12.720 }, 00:16:12.720 "base_bdevs_list": [ 00:16:12.720 { 00:16:12.720 "name": "spare", 00:16:12.720 "uuid": "4c0707da-7ec1-5703-b5fb-9926202ee47f", 00:16:12.720 "is_configured": true, 00:16:12.720 "data_offset": 256, 00:16:12.720 "data_size": 7936 00:16:12.720 }, 00:16:12.720 { 00:16:12.720 "name": "BaseBdev2", 00:16:12.720 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:12.720 "is_configured": true, 00:16:12.720 "data_offset": 256, 00:16:12.720 "data_size": 7936 00:16:12.720 } 00:16:12.720 ] 00:16:12.720 }' 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.720 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.720 [2024-11-26 15:32:11.183666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:12.980 [2024-11-26 15:32:11.267623] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.980 [2024-11-26 15:32:11.267678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.980 [2024-11-26 15:32:11.267696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.980 [2024-11-26 15:32:11.267703] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.980 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.980 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.980 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.980 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.981 "name": "raid_bdev1", 00:16:12.981 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:12.981 "strip_size_kb": 0, 00:16:12.981 "state": "online", 00:16:12.981 "raid_level": "raid1", 00:16:12.981 "superblock": true, 00:16:12.981 "num_base_bdevs": 2, 00:16:12.981 "num_base_bdevs_discovered": 1, 00:16:12.981 "num_base_bdevs_operational": 1, 00:16:12.981 "base_bdevs_list": [ 00:16:12.981 { 00:16:12.981 "name": null, 00:16:12.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.981 "is_configured": false, 00:16:12.981 "data_offset": 0, 00:16:12.981 "data_size": 7936 00:16:12.981 }, 00:16:12.981 { 00:16:12.981 "name": "BaseBdev2", 00:16:12.981 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:12.981 "is_configured": true, 00:16:12.981 "data_offset": 256, 00:16:12.981 "data_size": 7936 00:16:12.981 } 00:16:12.981 ] 00:16:12.981 }' 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.981 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.241 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.241 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.241 15:32:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.241 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.241 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.241 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.241 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.241 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.241 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.500 "name": "raid_bdev1", 00:16:13.500 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:13.500 "strip_size_kb": 0, 00:16:13.500 "state": "online", 00:16:13.500 "raid_level": "raid1", 00:16:13.500 "superblock": true, 00:16:13.500 "num_base_bdevs": 2, 00:16:13.500 "num_base_bdevs_discovered": 1, 00:16:13.500 "num_base_bdevs_operational": 1, 00:16:13.500 "base_bdevs_list": [ 00:16:13.500 { 00:16:13.500 "name": null, 00:16:13.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.500 "is_configured": false, 00:16:13.500 "data_offset": 0, 00:16:13.500 "data_size": 7936 00:16:13.500 }, 00:16:13.500 { 00:16:13.500 "name": "BaseBdev2", 00:16:13.500 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:13.500 "is_configured": true, 00:16:13.500 "data_offset": 256, 00:16:13.500 "data_size": 7936 00:16:13.500 } 00:16:13.500 ] 00:16:13.500 }' 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.500 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.500 [2024-11-26 15:32:11.848232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:13.500 [2024-11-26 15:32:11.848277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.501 [2024-11-26 15:32:11.848299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:13.501 [2024-11-26 15:32:11.848308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.501 [2024-11-26 15:32:11.848514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.501 [2024-11-26 15:32:11.848536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:16:13.501 [2024-11-26 15:32:11.848590] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:13.501 [2024-11-26 15:32:11.848607] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.501 [2024-11-26 15:32:11.848617] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:13.501 [2024-11-26 15:32:11.848626] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:13.501 BaseBdev1 00:16:13.501 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.501 15:32:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.440 "name": "raid_bdev1", 00:16:14.440 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:14.440 "strip_size_kb": 0, 00:16:14.440 "state": "online", 00:16:14.440 "raid_level": "raid1", 00:16:14.440 "superblock": true, 00:16:14.440 "num_base_bdevs": 2, 00:16:14.440 "num_base_bdevs_discovered": 1, 00:16:14.440 "num_base_bdevs_operational": 1, 00:16:14.440 "base_bdevs_list": [ 00:16:14.440 { 00:16:14.440 "name": null, 00:16:14.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.440 "is_configured": false, 00:16:14.440 "data_offset": 0, 00:16:14.440 "data_size": 7936 00:16:14.440 }, 00:16:14.440 { 00:16:14.440 "name": "BaseBdev2", 00:16:14.440 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:14.440 "is_configured": true, 00:16:14.440 "data_offset": 256, 00:16:14.440 "data_size": 7936 00:16:14.440 } 00:16:14.440 ] 00:16:14.440 }' 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.440 15:32:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.009 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.009 "name": "raid_bdev1", 00:16:15.009 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:15.010 "strip_size_kb": 0, 00:16:15.010 "state": "online", 00:16:15.010 "raid_level": "raid1", 00:16:15.010 "superblock": true, 00:16:15.010 "num_base_bdevs": 2, 00:16:15.010 "num_base_bdevs_discovered": 1, 00:16:15.010 "num_base_bdevs_operational": 1, 00:16:15.010 "base_bdevs_list": [ 00:16:15.010 { 00:16:15.010 "name": null, 00:16:15.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.010 "is_configured": false, 00:16:15.010 "data_offset": 0, 00:16:15.010 "data_size": 7936 00:16:15.010 }, 00:16:15.010 { 00:16:15.010 "name": "BaseBdev2", 00:16:15.010 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:15.010 "is_configured": 
true, 00:16:15.010 "data_offset": 256, 00:16:15.010 "data_size": 7936 00:16:15.010 } 00:16:15.010 ] 00:16:15.010 }' 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.010 [2024-11-26 15:32:13.452702] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.010 [2024-11-26 15:32:13.452825] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:15.010 [2024-11-26 15:32:13.452841] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:15.010 request: 00:16:15.010 { 00:16:15.010 "base_bdev": "BaseBdev1", 00:16:15.010 "raid_bdev": "raid_bdev1", 00:16:15.010 "method": "bdev_raid_add_base_bdev", 00:16:15.010 "req_id": 1 00:16:15.010 } 00:16:15.010 Got JSON-RPC error response 00:16:15.010 response: 00:16:15.010 { 00:16:15.010 "code": -22, 00:16:15.010 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:15.010 } 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:15.010 15:32:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.391 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.391 "name": "raid_bdev1", 00:16:16.391 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:16.391 "strip_size_kb": 0, 00:16:16.391 "state": "online", 00:16:16.391 "raid_level": "raid1", 00:16:16.391 "superblock": true, 00:16:16.392 "num_base_bdevs": 2, 00:16:16.392 "num_base_bdevs_discovered": 1, 00:16:16.392 "num_base_bdevs_operational": 1, 00:16:16.392 "base_bdevs_list": [ 00:16:16.392 { 00:16:16.392 "name": null, 00:16:16.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.392 "is_configured": false, 00:16:16.392 
"data_offset": 0, 00:16:16.392 "data_size": 7936 00:16:16.392 }, 00:16:16.392 { 00:16:16.392 "name": "BaseBdev2", 00:16:16.392 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:16.392 "is_configured": true, 00:16:16.392 "data_offset": 256, 00:16:16.392 "data_size": 7936 00:16:16.392 } 00:16:16.392 ] 00:16:16.392 }' 00:16:16.392 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.392 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.653 "name": "raid_bdev1", 00:16:16.653 "uuid": "116284f5-c060-4a6f-8707-0fe71e0246d5", 00:16:16.653 
"strip_size_kb": 0, 00:16:16.653 "state": "online", 00:16:16.653 "raid_level": "raid1", 00:16:16.653 "superblock": true, 00:16:16.653 "num_base_bdevs": 2, 00:16:16.653 "num_base_bdevs_discovered": 1, 00:16:16.653 "num_base_bdevs_operational": 1, 00:16:16.653 "base_bdevs_list": [ 00:16:16.653 { 00:16:16.653 "name": null, 00:16:16.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.653 "is_configured": false, 00:16:16.653 "data_offset": 0, 00:16:16.653 "data_size": 7936 00:16:16.653 }, 00:16:16.653 { 00:16:16.653 "name": "BaseBdev2", 00:16:16.653 "uuid": "1af14577-b5d7-54a8-8919-76d3f01b1e91", 00:16:16.653 "is_configured": true, 00:16:16.653 "data_offset": 256, 00:16:16.653 "data_size": 7936 00:16:16.653 } 00:16:16.653 ] 00:16:16.653 }' 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.653 15:32:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 99618 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99618 ']' 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99618 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99618 00:16:16.653 15:32:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.653 killing process with pid 99618 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99618' 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99618 00:16:16.653 Received shutdown signal, test time was about 60.000000 seconds 00:16:16.653 00:16:16.653 Latency(us) 00:16:16.653 [2024-11-26T15:32:15.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.653 [2024-11-26T15:32:15.132Z] =================================================================================================================== 00:16:16.653 [2024-11-26T15:32:15.132Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:16.653 [2024-11-26 15:32:15.067212] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.653 [2024-11-26 15:32:15.067335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.653 [2024-11-26 15:32:15.067384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.653 [2024-11-26 15:32:15.067397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:16.653 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99618 00:16:16.913 [2024-11-26 15:32:15.128826] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.174 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:17.174 00:16:17.174 real 0m18.437s 00:16:17.174 user 0m24.159s 00:16:17.174 sys 0m2.780s 00:16:17.174 15:32:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.174 15:32:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.174 ************************************ 00:16:17.174 END TEST raid_rebuild_test_sb_md_separate 00:16:17.174 ************************************ 00:16:17.174 15:32:15 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:17.174 15:32:15 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:17.174 15:32:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:17.174 15:32:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.174 15:32:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.174 ************************************ 00:16:17.174 START TEST raid_state_function_test_sb_md_interleaved 00:16:17.174 ************************************ 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:17.174 15:32:15 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=100292 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 100292' 00:16:17.174 Process raid pid: 100292 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 100292 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100292 ']' 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.174 15:32:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.174 [2024-11-26 15:32:15.627410] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:16:17.174 [2024-11-26 15:32:15.627548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.435 [2024-11-26 15:32:15.765497] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:17.435 [2024-11-26 15:32:15.802635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.435 [2024-11-26 15:32:15.842695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.695 [2024-11-26 15:32:15.919441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.695 [2024-11-26 15:32:15.919484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.265 [2024-11-26 15:32:16.452085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.265 [2024-11-26 15:32:16.452130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.265 [2024-11-26 15:32:16.452143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:16:18.265 [2024-11-26 15:32:16.452150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.265 "name": "Existed_Raid", 00:16:18.265 "uuid": "405b411e-cccd-4a61-91a1-e88fe6b8f94c", 00:16:18.265 "strip_size_kb": 0, 00:16:18.265 "state": "configuring", 00:16:18.265 "raid_level": "raid1", 00:16:18.265 "superblock": true, 00:16:18.265 "num_base_bdevs": 2, 00:16:18.265 "num_base_bdevs_discovered": 0, 00:16:18.265 "num_base_bdevs_operational": 2, 00:16:18.265 "base_bdevs_list": [ 00:16:18.265 { 00:16:18.265 "name": "BaseBdev1", 00:16:18.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.265 "is_configured": false, 00:16:18.265 "data_offset": 0, 00:16:18.265 "data_size": 0 00:16:18.265 }, 00:16:18.265 { 00:16:18.265 "name": "BaseBdev2", 00:16:18.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.265 "is_configured": false, 00:16:18.265 "data_offset": 0, 00:16:18.265 "data_size": 0 00:16:18.265 } 00:16:18.265 ] 00:16:18.265 }' 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.265 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.525 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.525 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.525 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.525 [2024-11-26 15:32:16.932081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:16:18.525 [2024-11-26 15:32:16.932120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:16:18.525 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.525 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:18.525 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.525 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.525 [2024-11-26 15:32:16.944115] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.525 [2024-11-26 15:32:16.944146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.525 [2024-11-26 15:32:16.944156] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.525 [2024-11-26 15:32:16.944162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.525 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.526 [2024-11-26 15:32:16.971374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.526 BaseBdev1 00:16:18.526 15:32:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.526 15:32:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.526 [ 00:16:18.526 { 00:16:18.526 "name": "BaseBdev1", 00:16:18.526 "aliases": [ 00:16:18.526 "f8432741-b151-4450-ad40-856aa0ee0839" 00:16:18.526 ], 00:16:18.526 "product_name": "Malloc 
disk", 00:16:18.526 "block_size": 4128, 00:16:18.526 "num_blocks": 8192, 00:16:18.526 "uuid": "f8432741-b151-4450-ad40-856aa0ee0839", 00:16:18.526 "md_size": 32, 00:16:18.526 "md_interleave": true, 00:16:18.526 "dif_type": 0, 00:16:18.526 "assigned_rate_limits": { 00:16:18.526 "rw_ios_per_sec": 0, 00:16:18.785 "rw_mbytes_per_sec": 0, 00:16:18.785 "r_mbytes_per_sec": 0, 00:16:18.785 "w_mbytes_per_sec": 0 00:16:18.785 }, 00:16:18.785 "claimed": true, 00:16:18.785 "claim_type": "exclusive_write", 00:16:18.785 "zoned": false, 00:16:18.785 "supported_io_types": { 00:16:18.785 "read": true, 00:16:18.785 "write": true, 00:16:18.785 "unmap": true, 00:16:18.785 "flush": true, 00:16:18.785 "reset": true, 00:16:18.785 "nvme_admin": false, 00:16:18.785 "nvme_io": false, 00:16:18.785 "nvme_io_md": false, 00:16:18.785 "write_zeroes": true, 00:16:18.785 "zcopy": true, 00:16:18.785 "get_zone_info": false, 00:16:18.785 "zone_management": false, 00:16:18.785 "zone_append": false, 00:16:18.785 "compare": false, 00:16:18.785 "compare_and_write": false, 00:16:18.785 "abort": true, 00:16:18.785 "seek_hole": false, 00:16:18.785 "seek_data": false, 00:16:18.785 "copy": true, 00:16:18.785 "nvme_iov_md": false 00:16:18.785 }, 00:16:18.785 "memory_domains": [ 00:16:18.785 { 00:16:18.785 "dma_device_id": "system", 00:16:18.785 "dma_device_type": 1 00:16:18.785 }, 00:16:18.785 { 00:16:18.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.785 "dma_device_type": 2 00:16:18.785 } 00:16:18.785 ], 00:16:18.785 "driver_specific": {} 00:16:18.785 } 00:16:18.785 ] 00:16:18.785 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.785 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:18.785 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:18.785 15:32:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.785 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.786 "name": "Existed_Raid", 00:16:18.786 "uuid": 
"81a48017-e973-437e-8b92-e64e1a7af242", 00:16:18.786 "strip_size_kb": 0, 00:16:18.786 "state": "configuring", 00:16:18.786 "raid_level": "raid1", 00:16:18.786 "superblock": true, 00:16:18.786 "num_base_bdevs": 2, 00:16:18.786 "num_base_bdevs_discovered": 1, 00:16:18.786 "num_base_bdevs_operational": 2, 00:16:18.786 "base_bdevs_list": [ 00:16:18.786 { 00:16:18.786 "name": "BaseBdev1", 00:16:18.786 "uuid": "f8432741-b151-4450-ad40-856aa0ee0839", 00:16:18.786 "is_configured": true, 00:16:18.786 "data_offset": 256, 00:16:18.786 "data_size": 7936 00:16:18.786 }, 00:16:18.786 { 00:16:18.786 "name": "BaseBdev2", 00:16:18.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.786 "is_configured": false, 00:16:18.786 "data_offset": 0, 00:16:18.786 "data_size": 0 00:16:18.786 } 00:16:18.786 ] 00:16:18.786 }' 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.786 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.045 [2024-11-26 15:32:17.467519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.045 [2024-11-26 15:32:17.467630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b 
''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.045 [2024-11-26 15:32:17.479596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.045 [2024-11-26 15:32:17.481716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.045 [2024-11-26 15:32:17.481786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:19.045 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.046 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.305 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.305 "name": "Existed_Raid", 00:16:19.305 "uuid": "f3e71467-eac8-45f5-a846-bc31b0bdd222", 00:16:19.305 "strip_size_kb": 0, 00:16:19.305 "state": "configuring", 00:16:19.305 "raid_level": "raid1", 00:16:19.305 "superblock": true, 00:16:19.305 "num_base_bdevs": 2, 00:16:19.305 "num_base_bdevs_discovered": 1, 00:16:19.305 "num_base_bdevs_operational": 2, 00:16:19.305 "base_bdevs_list": [ 00:16:19.305 { 00:16:19.305 "name": "BaseBdev1", 00:16:19.305 "uuid": "f8432741-b151-4450-ad40-856aa0ee0839", 00:16:19.305 "is_configured": true, 00:16:19.305 "data_offset": 256, 00:16:19.305 "data_size": 7936 00:16:19.305 }, 00:16:19.305 { 00:16:19.305 "name": "BaseBdev2", 00:16:19.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.305 "is_configured": false, 00:16:19.305 "data_offset": 0, 00:16:19.305 
"data_size": 0 00:16:19.305 } 00:16:19.305 ] 00:16:19.305 }' 00:16:19.305 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.305 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.565 [2024-11-26 15:32:17.952614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.565 [2024-11-26 15:32:17.952878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:19.565 [2024-11-26 15:32:17.952942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:19.565 [2024-11-26 15:32:17.953089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:19.565 [2024-11-26 15:32:17.953217] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:19.565 [2024-11-26 15:32:17.953256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:16:19.565 BaseBdev2 00:16:19.565 [2024-11-26 15:32:17.953360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 --
# local bdev_name=BaseBdev2 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.565 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.566 [ 00:16:19.566 { 00:16:19.566 "name": "BaseBdev2", 00:16:19.566 "aliases": [ 00:16:19.566 "433a938e-e8d5-4443-8567-574b51893b18" 00:16:19.566 ], 00:16:19.566 "product_name": "Malloc disk", 00:16:19.566 "block_size": 4128, 00:16:19.566 "num_blocks": 8192, 00:16:19.566 "uuid": "433a938e-e8d5-4443-8567-574b51893b18", 00:16:19.566 "md_size": 32, 00:16:19.566 "md_interleave": true, 00:16:19.566 "dif_type": 0, 00:16:19.566 "assigned_rate_limits": { 00:16:19.566 "rw_ios_per_sec": 0, 00:16:19.566 "rw_mbytes_per_sec": 0, 
00:16:19.566 "r_mbytes_per_sec": 0, 00:16:19.566 "w_mbytes_per_sec": 0 00:16:19.566 }, 00:16:19.566 "claimed": true, 00:16:19.566 "claim_type": "exclusive_write", 00:16:19.566 "zoned": false, 00:16:19.566 "supported_io_types": { 00:16:19.566 "read": true, 00:16:19.566 "write": true, 00:16:19.566 "unmap": true, 00:16:19.566 "flush": true, 00:16:19.566 "reset": true, 00:16:19.566 "nvme_admin": false, 00:16:19.566 "nvme_io": false, 00:16:19.566 "nvme_io_md": false, 00:16:19.566 "write_zeroes": true, 00:16:19.566 "zcopy": true, 00:16:19.566 "get_zone_info": false, 00:16:19.566 "zone_management": false, 00:16:19.566 "zone_append": false, 00:16:19.566 "compare": false, 00:16:19.566 "compare_and_write": false, 00:16:19.566 "abort": true, 00:16:19.566 "seek_hole": false, 00:16:19.566 "seek_data": false, 00:16:19.566 "copy": true, 00:16:19.566 "nvme_iov_md": false 00:16:19.566 }, 00:16:19.566 "memory_domains": [ 00:16:19.566 { 00:16:19.566 "dma_device_id": "system", 00:16:19.566 "dma_device_type": 1 00:16:19.566 }, 00:16:19.566 { 00:16:19.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.566 "dma_device_type": 2 00:16:19.566 } 00:16:19.566 ], 00:16:19.566 "driver_specific": {} 00:16:19.566 } 00:16:19.566 ] 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.566 15:32:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.566 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.825 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.825 "name": "Existed_Raid", 00:16:19.825 "uuid": "f3e71467-eac8-45f5-a846-bc31b0bdd222", 00:16:19.825 "strip_size_kb": 0, 00:16:19.825 "state": 
"online", 00:16:19.825 "raid_level": "raid1", 00:16:19.825 "superblock": true, 00:16:19.825 "num_base_bdevs": 2, 00:16:19.826 "num_base_bdevs_discovered": 2, 00:16:19.826 "num_base_bdevs_operational": 2, 00:16:19.826 "base_bdevs_list": [ 00:16:19.826 { 00:16:19.826 "name": "BaseBdev1", 00:16:19.826 "uuid": "f8432741-b151-4450-ad40-856aa0ee0839", 00:16:19.826 "is_configured": true, 00:16:19.826 "data_offset": 256, 00:16:19.826 "data_size": 7936 00:16:19.826 }, 00:16:19.826 { 00:16:19.826 "name": "BaseBdev2", 00:16:19.826 "uuid": "433a938e-e8d5-4443-8567-574b51893b18", 00:16:19.826 "is_configured": true, 00:16:19.826 "data_offset": 256, 00:16:19.826 "data_size": 7936 00:16:19.826 } 00:16:19.826 ] 00:16:19.826 }' 00:16:19.826 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.826 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.085 [2024-11-26 15:32:18.469078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.085 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:20.085 "name": "Existed_Raid", 00:16:20.085 "aliases": [ 00:16:20.085 "f3e71467-eac8-45f5-a846-bc31b0bdd222" 00:16:20.085 ], 00:16:20.085 "product_name": "Raid Volume", 00:16:20.085 "block_size": 4128, 00:16:20.085 "num_blocks": 7936, 00:16:20.085 "uuid": "f3e71467-eac8-45f5-a846-bc31b0bdd222", 00:16:20.085 "md_size": 32, 00:16:20.085 "md_interleave": true, 00:16:20.085 "dif_type": 0, 00:16:20.085 "assigned_rate_limits": { 00:16:20.085 "rw_ios_per_sec": 0, 00:16:20.085 "rw_mbytes_per_sec": 0, 00:16:20.085 "r_mbytes_per_sec": 0, 00:16:20.085 "w_mbytes_per_sec": 0 00:16:20.085 }, 00:16:20.085 "claimed": false, 00:16:20.085 "zoned": false, 00:16:20.085 "supported_io_types": { 00:16:20.086 "read": true, 00:16:20.086 "write": true, 00:16:20.086 "unmap": false, 00:16:20.086 "flush": false, 00:16:20.086 "reset": true, 00:16:20.086 "nvme_admin": false, 00:16:20.086 "nvme_io": false, 00:16:20.086 "nvme_io_md": false, 00:16:20.086 "write_zeroes": true, 00:16:20.086 "zcopy": false, 00:16:20.086 "get_zone_info": false, 00:16:20.086 "zone_management": false, 00:16:20.086 "zone_append": false, 00:16:20.086 "compare": false, 00:16:20.086 "compare_and_write": false, 00:16:20.086 "abort": false, 00:16:20.086 "seek_hole": false, 00:16:20.086 "seek_data": false, 00:16:20.086 "copy": false, 00:16:20.086 "nvme_iov_md": false 00:16:20.086 }, 00:16:20.086 
"memory_domains": [ 00:16:20.086 { 00:16:20.086 "dma_device_id": "system", 00:16:20.086 "dma_device_type": 1 00:16:20.086 }, 00:16:20.086 { 00:16:20.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.086 "dma_device_type": 2 00:16:20.086 }, 00:16:20.086 { 00:16:20.086 "dma_device_id": "system", 00:16:20.086 "dma_device_type": 1 00:16:20.086 }, 00:16:20.086 { 00:16:20.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.086 "dma_device_type": 2 00:16:20.086 } 00:16:20.086 ], 00:16:20.086 "driver_specific": { 00:16:20.086 "raid": { 00:16:20.086 "uuid": "f3e71467-eac8-45f5-a846-bc31b0bdd222", 00:16:20.086 "strip_size_kb": 0, 00:16:20.086 "state": "online", 00:16:20.086 "raid_level": "raid1", 00:16:20.086 "superblock": true, 00:16:20.086 "num_base_bdevs": 2, 00:16:20.086 "num_base_bdevs_discovered": 2, 00:16:20.086 "num_base_bdevs_operational": 2, 00:16:20.086 "base_bdevs_list": [ 00:16:20.086 { 00:16:20.086 "name": "BaseBdev1", 00:16:20.086 "uuid": "f8432741-b151-4450-ad40-856aa0ee0839", 00:16:20.086 "is_configured": true, 00:16:20.086 "data_offset": 256, 00:16:20.086 "data_size": 7936 00:16:20.086 }, 00:16:20.086 { 00:16:20.086 "name": "BaseBdev2", 00:16:20.086 "uuid": "433a938e-e8d5-4443-8567-574b51893b18", 00:16:20.086 "is_configured": true, 00:16:20.086 "data_offset": 256, 00:16:20.086 "data_size": 7936 00:16:20.086 } 00:16:20.086 ] 00:16:20.086 } 00:16:20.086 } 00:16:20.086 }' 00:16:20.086 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.086 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:20.086 BaseBdev2' 00:16:20.086 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.346 15:32:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.346 15:32:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 [2024-11-26 15:32:18.708918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.346 15:32:18 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.346 "name": "Existed_Raid", 00:16:20.346 "uuid": "f3e71467-eac8-45f5-a846-bc31b0bdd222", 00:16:20.346 "strip_size_kb": 0, 00:16:20.346 "state": "online", 00:16:20.346 "raid_level": "raid1", 
00:16:20.346 "superblock": true, 00:16:20.346 "num_base_bdevs": 2, 00:16:20.346 "num_base_bdevs_discovered": 1, 00:16:20.346 "num_base_bdevs_operational": 1, 00:16:20.346 "base_bdevs_list": [ 00:16:20.346 { 00:16:20.346 "name": null, 00:16:20.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.346 "is_configured": false, 00:16:20.346 "data_offset": 0, 00:16:20.346 "data_size": 7936 00:16:20.346 }, 00:16:20.346 { 00:16:20.346 "name": "BaseBdev2", 00:16:20.346 "uuid": "433a938e-e8d5-4443-8567-574b51893b18", 00:16:20.346 "is_configured": true, 00:16:20.346 "data_offset": 256, 00:16:20.346 "data_size": 7936 00:16:20.346 } 00:16:20.346 ] 00:16:20.346 }' 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.346 15:32:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.927 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:20.927 [2024-11-26 15:32:19.250556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:20.927 [2024-11-26 15:32:19.250663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.928 [2024-11-26 15:32:19.272314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.928 [2024-11-26 15:32:19.272428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.928 [2024-11-26 15:32:19.272466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 100292 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100292 ']' 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100292 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100292 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100292' 00:16:20.928 killing process with pid 100292 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 100292 00:16:20.928 [2024-11-26 15:32:19.375100] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:16:20.928 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 100292 00:16:20.928 [2024-11-26 15:32:19.376721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.500 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:21.500 ************************************ 00:16:21.500 END TEST raid_state_function_test_sb_md_interleaved 00:16:21.500 ************************************ 00:16:21.500 00:16:21.500 real 0m4.180s 00:16:21.500 user 0m6.446s 00:16:21.500 sys 0m0.941s 00:16:21.500 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.500 15:32:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.500 15:32:19 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:21.500 15:32:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:21.500 15:32:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.500 15:32:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.500 ************************************ 00:16:21.500 START TEST raid_superblock_test_md_interleaved 00:16:21.500 ************************************ 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=100533 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 100533 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100533 ']' 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.500 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.501 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.501 15:32:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:21.501 [2024-11-26 15:32:19.880584] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:21.501 [2024-11-26 15:32:19.880772] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100533 ] 00:16:21.761 [2024-11-26 15:32:20.015618] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:21.761 [2024-11-26 15:32:20.055493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.761 [2024-11-26 15:32:20.096794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.761 [2024-11-26 15:32:20.173546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.761 [2024-11-26 15:32:20.173660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.332 15:32:20 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.332 malloc1 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.332 [2024-11-26 15:32:20.725329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:22.332 [2024-11-26 15:32:20.725484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.332 [2024-11-26 15:32:20.725533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:22.332 [2024-11-26 15:32:20.725564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.332 [2024-11-26 15:32:20.727717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.332 [2024-11-26 15:32:20.727786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:22.332 pt1 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:22.332 
15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.332 malloc2 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.332 [2024-11-26 15:32:20.764262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.332 [2024-11-26 15:32:20.764310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.332 [2024-11-26 15:32:20.764330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:22.332 [2024-11-26 15:32:20.764338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.332 [2024-11-26 15:32:20.766478] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.332 [2024-11-26 15:32:20.766555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.332 pt2 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:22.332 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.333 [2024-11-26 15:32:20.776296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:22.333 [2024-11-26 15:32:20.778435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.333 [2024-11-26 15:32:20.778597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:22.333 [2024-11-26 15:32:20.778611] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:22.333 [2024-11-26 15:32:20.778697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:22.333 [2024-11-26 15:32:20.778773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:22.333 [2024-11-26 15:32:20.778783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:22.333 [2024-11-26 15:32:20.778863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.333 
15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.333 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.593 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.593 
15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.593 "name": "raid_bdev1", 00:16:22.593 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:22.593 "strip_size_kb": 0, 00:16:22.593 "state": "online", 00:16:22.593 "raid_level": "raid1", 00:16:22.593 "superblock": true, 00:16:22.593 "num_base_bdevs": 2, 00:16:22.593 "num_base_bdevs_discovered": 2, 00:16:22.593 "num_base_bdevs_operational": 2, 00:16:22.593 "base_bdevs_list": [ 00:16:22.593 { 00:16:22.593 "name": "pt1", 00:16:22.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.593 "is_configured": true, 00:16:22.594 "data_offset": 256, 00:16:22.594 "data_size": 7936 00:16:22.594 }, 00:16:22.594 { 00:16:22.594 "name": "pt2", 00:16:22.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.594 "is_configured": true, 00:16:22.594 "data_offset": 256, 00:16:22.594 "data_size": 7936 00:16:22.594 } 00:16:22.594 ] 00:16:22.594 }' 00:16:22.594 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.594 15:32:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:22.854 15:32:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:22.854 [2024-11-26 15:32:21.172686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:22.854 "name": "raid_bdev1", 00:16:22.854 "aliases": [ 00:16:22.854 "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1" 00:16:22.854 ], 00:16:22.854 "product_name": "Raid Volume", 00:16:22.854 "block_size": 4128, 00:16:22.854 "num_blocks": 7936, 00:16:22.854 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:22.854 "md_size": 32, 00:16:22.854 "md_interleave": true, 00:16:22.854 "dif_type": 0, 00:16:22.854 "assigned_rate_limits": { 00:16:22.854 "rw_ios_per_sec": 0, 00:16:22.854 "rw_mbytes_per_sec": 0, 00:16:22.854 "r_mbytes_per_sec": 0, 00:16:22.854 "w_mbytes_per_sec": 0 00:16:22.854 }, 00:16:22.854 "claimed": false, 00:16:22.854 "zoned": false, 00:16:22.854 "supported_io_types": { 00:16:22.854 "read": true, 00:16:22.854 "write": true, 00:16:22.854 "unmap": false, 00:16:22.854 "flush": false, 00:16:22.854 "reset": true, 00:16:22.854 "nvme_admin": false, 00:16:22.854 "nvme_io": false, 00:16:22.854 "nvme_io_md": false, 00:16:22.854 "write_zeroes": true, 00:16:22.854 "zcopy": false, 00:16:22.854 "get_zone_info": false, 00:16:22.854 "zone_management": false, 00:16:22.854 "zone_append": false, 00:16:22.854 "compare": false, 00:16:22.854 "compare_and_write": false, 00:16:22.854 
"abort": false, 00:16:22.854 "seek_hole": false, 00:16:22.854 "seek_data": false, 00:16:22.854 "copy": false, 00:16:22.854 "nvme_iov_md": false 00:16:22.854 }, 00:16:22.854 "memory_domains": [ 00:16:22.854 { 00:16:22.854 "dma_device_id": "system", 00:16:22.854 "dma_device_type": 1 00:16:22.854 }, 00:16:22.854 { 00:16:22.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.854 "dma_device_type": 2 00:16:22.854 }, 00:16:22.854 { 00:16:22.854 "dma_device_id": "system", 00:16:22.854 "dma_device_type": 1 00:16:22.854 }, 00:16:22.854 { 00:16:22.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.854 "dma_device_type": 2 00:16:22.854 } 00:16:22.854 ], 00:16:22.854 "driver_specific": { 00:16:22.854 "raid": { 00:16:22.854 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:22.854 "strip_size_kb": 0, 00:16:22.854 "state": "online", 00:16:22.854 "raid_level": "raid1", 00:16:22.854 "superblock": true, 00:16:22.854 "num_base_bdevs": 2, 00:16:22.854 "num_base_bdevs_discovered": 2, 00:16:22.854 "num_base_bdevs_operational": 2, 00:16:22.854 "base_bdevs_list": [ 00:16:22.854 { 00:16:22.854 "name": "pt1", 00:16:22.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.854 "is_configured": true, 00:16:22.854 "data_offset": 256, 00:16:22.854 "data_size": 7936 00:16:22.854 }, 00:16:22.854 { 00:16:22.854 "name": "pt2", 00:16:22.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.854 "is_configured": true, 00:16:22.854 "data_offset": 256, 00:16:22.854 "data_size": 7936 00:16:22.854 } 00:16:22.854 ] 00:16:22.854 } 00:16:22.854 } 00:16:22.854 }' 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:22.854 pt2' 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:22.854 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.114 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.114 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:23.114 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:23.114 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.115 [2024-11-26 15:32:21.376652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1 ']' 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.115 [2024-11-26 15:32:21.424424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.115 [2024-11-26 15:32:21.424446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.115 [2024-11-26 
15:32:21.424543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.115 [2024-11-26 15:32:21.424618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.115 [2024-11-26 15:32:21.424630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.115 15:32:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:23.115 15:32:21 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.115 [2024-11-26 15:32:21.564468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:23.115 [2024-11-26 15:32:21.566563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:23.115 [2024-11-26 15:32:21.566627] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:23.115 [2024-11-26 15:32:21.566668] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:23.115 [2024-11-26 15:32:21.566682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.115 [2024-11-26 15:32:21.566691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:16:23.115 request: 00:16:23.115 { 00:16:23.115 "name": "raid_bdev1", 00:16:23.115 "raid_level": "raid1", 00:16:23.115 "base_bdevs": [ 00:16:23.115 "malloc1", 00:16:23.115 "malloc2" 00:16:23.115 ], 00:16:23.115 "superblock": false, 00:16:23.115 "method": "bdev_raid_create", 00:16:23.115 "req_id": 1 00:16:23.115 } 00:16:23.115 Got JSON-RPC error response 
00:16:23.115 response: 00:16:23.115 { 00:16:23.115 "code": -17, 00:16:23.115 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:23.115 } 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:23.115 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.375 
[2024-11-26 15:32:21.628453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.375 [2024-11-26 15:32:21.628565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.375 [2024-11-26 15:32:21.628595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:23.375 [2024-11-26 15:32:21.628627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.375 [2024-11-26 15:32:21.630705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.375 [2024-11-26 15:32:21.630774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:23.375 [2024-11-26 15:32:21.630828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:23.375 [2024-11-26 15:32:21.630914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:23.375 pt1 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.375 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.375 "name": "raid_bdev1", 00:16:23.375 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:23.375 "strip_size_kb": 0, 00:16:23.375 "state": "configuring", 00:16:23.375 "raid_level": "raid1", 00:16:23.375 "superblock": true, 00:16:23.375 "num_base_bdevs": 2, 00:16:23.375 "num_base_bdevs_discovered": 1, 00:16:23.375 "num_base_bdevs_operational": 2, 00:16:23.375 "base_bdevs_list": [ 00:16:23.375 { 00:16:23.375 "name": "pt1", 00:16:23.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.376 "is_configured": true, 00:16:23.376 "data_offset": 256, 00:16:23.376 "data_size": 7936 00:16:23.376 }, 00:16:23.376 { 00:16:23.376 "name": null, 00:16:23.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.376 "is_configured": false, 00:16:23.376 "data_offset": 256, 00:16:23.376 "data_size": 7936 00:16:23.376 } 00:16:23.376 ] 00:16:23.376 }' 00:16:23.376 15:32:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.376 15:32:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.636 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:23.636 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:23.636 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:23.636 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.636 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.636 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.636 [2024-11-26 15:32:22.040577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.636 [2024-11-26 15:32:22.040671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.636 [2024-11-26 15:32:22.040691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:23.636 [2024-11-26 15:32:22.040701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.636 [2024-11-26 15:32:22.040796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.636 [2024-11-26 15:32:22.040808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.636 [2024-11-26 15:32:22.040842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:23.636 [2024-11-26 15:32:22.040859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.636 [2024-11-26 15:32:22.040918] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:23.637 [2024-11-26 15:32:22.040929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:23.637 [2024-11-26 15:32:22.040990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:23.637 [2024-11-26 15:32:22.041047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:23.637 [2024-11-26 15:32:22.041054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:23.637 [2024-11-26 15:32:22.041102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.637 pt2 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.637 "name": "raid_bdev1", 00:16:23.637 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:23.637 "strip_size_kb": 0, 00:16:23.637 "state": "online", 00:16:23.637 "raid_level": "raid1", 00:16:23.637 "superblock": true, 00:16:23.637 "num_base_bdevs": 2, 00:16:23.637 "num_base_bdevs_discovered": 2, 00:16:23.637 "num_base_bdevs_operational": 2, 00:16:23.637 "base_bdevs_list": [ 00:16:23.637 { 00:16:23.637 "name": "pt1", 00:16:23.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.637 "is_configured": true, 00:16:23.637 "data_offset": 256, 00:16:23.637 "data_size": 7936 00:16:23.637 }, 00:16:23.637 { 00:16:23.637 "name": "pt2", 00:16:23.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.637 "is_configured": true, 00:16:23.637 "data_offset": 256, 00:16:23.637 "data_size": 7936 00:16:23.637 } 00:16:23.637 ] 00:16:23.637 }' 00:16:23.637 15:32:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.637 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.207 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.208 [2024-11-26 15:32:22.484954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:24.208 "name": "raid_bdev1", 00:16:24.208 "aliases": [ 00:16:24.208 "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1" 00:16:24.208 ], 00:16:24.208 "product_name": "Raid Volume", 00:16:24.208 "block_size": 4128, 00:16:24.208 
"num_blocks": 7936, 00:16:24.208 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:24.208 "md_size": 32, 00:16:24.208 "md_interleave": true, 00:16:24.208 "dif_type": 0, 00:16:24.208 "assigned_rate_limits": { 00:16:24.208 "rw_ios_per_sec": 0, 00:16:24.208 "rw_mbytes_per_sec": 0, 00:16:24.208 "r_mbytes_per_sec": 0, 00:16:24.208 "w_mbytes_per_sec": 0 00:16:24.208 }, 00:16:24.208 "claimed": false, 00:16:24.208 "zoned": false, 00:16:24.208 "supported_io_types": { 00:16:24.208 "read": true, 00:16:24.208 "write": true, 00:16:24.208 "unmap": false, 00:16:24.208 "flush": false, 00:16:24.208 "reset": true, 00:16:24.208 "nvme_admin": false, 00:16:24.208 "nvme_io": false, 00:16:24.208 "nvme_io_md": false, 00:16:24.208 "write_zeroes": true, 00:16:24.208 "zcopy": false, 00:16:24.208 "get_zone_info": false, 00:16:24.208 "zone_management": false, 00:16:24.208 "zone_append": false, 00:16:24.208 "compare": false, 00:16:24.208 "compare_and_write": false, 00:16:24.208 "abort": false, 00:16:24.208 "seek_hole": false, 00:16:24.208 "seek_data": false, 00:16:24.208 "copy": false, 00:16:24.208 "nvme_iov_md": false 00:16:24.208 }, 00:16:24.208 "memory_domains": [ 00:16:24.208 { 00:16:24.208 "dma_device_id": "system", 00:16:24.208 "dma_device_type": 1 00:16:24.208 }, 00:16:24.208 { 00:16:24.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.208 "dma_device_type": 2 00:16:24.208 }, 00:16:24.208 { 00:16:24.208 "dma_device_id": "system", 00:16:24.208 "dma_device_type": 1 00:16:24.208 }, 00:16:24.208 { 00:16:24.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.208 "dma_device_type": 2 00:16:24.208 } 00:16:24.208 ], 00:16:24.208 "driver_specific": { 00:16:24.208 "raid": { 00:16:24.208 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:24.208 "strip_size_kb": 0, 00:16:24.208 "state": "online", 00:16:24.208 "raid_level": "raid1", 00:16:24.208 "superblock": true, 00:16:24.208 "num_base_bdevs": 2, 00:16:24.208 "num_base_bdevs_discovered": 2, 00:16:24.208 "num_base_bdevs_operational": 
2, 00:16:24.208 "base_bdevs_list": [ 00:16:24.208 { 00:16:24.208 "name": "pt1", 00:16:24.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:24.208 "is_configured": true, 00:16:24.208 "data_offset": 256, 00:16:24.208 "data_size": 7936 00:16:24.208 }, 00:16:24.208 { 00:16:24.208 "name": "pt2", 00:16:24.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.208 "is_configured": true, 00:16:24.208 "data_offset": 256, 00:16:24.208 "data_size": 7936 00:16:24.208 } 00:16:24.208 ] 00:16:24.208 } 00:16:24.208 } 00:16:24.208 }' 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:24.208 pt2' 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.208 15:32:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.208 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:24.469 [2024-11-26 15:32:22.704995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1 '!=' a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1 ']' 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.469 [2024-11-26 15:32:22.752777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.469 "name": "raid_bdev1", 00:16:24.469 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:24.469 "strip_size_kb": 0, 00:16:24.469 "state": "online", 00:16:24.469 "raid_level": "raid1", 00:16:24.469 "superblock": true, 00:16:24.469 "num_base_bdevs": 2, 00:16:24.469 "num_base_bdevs_discovered": 1, 00:16:24.469 "num_base_bdevs_operational": 1, 00:16:24.469 "base_bdevs_list": [ 00:16:24.469 { 00:16:24.469 "name": null, 00:16:24.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.469 "is_configured": false, 00:16:24.469 "data_offset": 0, 00:16:24.469 "data_size": 7936 00:16:24.469 }, 00:16:24.469 { 00:16:24.469 "name": "pt2", 00:16:24.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.469 "is_configured": true, 00:16:24.469 "data_offset": 256, 00:16:24.469 "data_size": 7936 00:16:24.469 } 00:16:24.469 ] 00:16:24.469 
}' 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.469 15:32:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.040 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.041 [2024-11-26 15:32:23.224895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.041 [2024-11-26 15:32:23.224961] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.041 [2024-11-26 15:32:23.225036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.041 [2024-11-26 15:32:23.225104] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.041 [2024-11-26 15:32:23.225175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.041 
15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.041 [2024-11-26 15:32:23.288914] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:25.041 [2024-11-26 15:32:23.288960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.041 [2024-11-26 15:32:23.288973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:25.041 [2024-11-26 15:32:23.288983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.041 [2024-11-26 15:32:23.291126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.041 [2024-11-26 15:32:23.291162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:25.041 [2024-11-26 15:32:23.291226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:25.041 [2024-11-26 15:32:23.291257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:25.041 [2024-11-26 15:32:23.291304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:25.041 [2024-11-26 15:32:23.291314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:25.041 [2024-11-26 15:32:23.291396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:25.041 [2024-11-26 15:32:23.291456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:25.041 [2024-11-26 15:32:23.291463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:25.041 [2024-11-26 15:32:23.291511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.041 pt2 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:25.041 15:32:23 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.041 "name": "raid_bdev1", 00:16:25.041 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:25.041 "strip_size_kb": 0, 00:16:25.041 "state": "online", 00:16:25.041 
"raid_level": "raid1", 00:16:25.041 "superblock": true, 00:16:25.041 "num_base_bdevs": 2, 00:16:25.041 "num_base_bdevs_discovered": 1, 00:16:25.041 "num_base_bdevs_operational": 1, 00:16:25.041 "base_bdevs_list": [ 00:16:25.041 { 00:16:25.041 "name": null, 00:16:25.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.041 "is_configured": false, 00:16:25.041 "data_offset": 256, 00:16:25.041 "data_size": 7936 00:16:25.041 }, 00:16:25.041 { 00:16:25.041 "name": "pt2", 00:16:25.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.041 "is_configured": true, 00:16:25.041 "data_offset": 256, 00:16:25.041 "data_size": 7936 00:16:25.041 } 00:16:25.041 ] 00:16:25.041 }' 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.041 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.308 [2024-11-26 15:32:23.633005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.308 [2024-11-26 15:32:23.633077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.308 [2024-11-26 15:32:23.633144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.308 [2024-11-26 15:32:23.633207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.308 [2024-11-26 15:32:23.633239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:25.308 15:32:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.308 [2024-11-26 15:32:23.697035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:25.308 [2024-11-26 15:32:23.697118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.308 [2024-11-26 15:32:23.697152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:25.308 [2024-11-26 15:32:23.697186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.308 [2024-11-26 15:32:23.699303] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.308 [2024-11-26 15:32:23.699366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:25.308 [2024-11-26 15:32:23.699426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:25.308 [2024-11-26 15:32:23.699466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:25.308 [2024-11-26 15:32:23.699577] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:25.308 [2024-11-26 15:32:23.699643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.308 [2024-11-26 15:32:23.699690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:16:25.308 [2024-11-26 15:32:23.699751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:25.308 [2024-11-26 15:32:23.699853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:25.308 [2024-11-26 15:32:23.699892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:25.308 [2024-11-26 15:32:23.699962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:25.308 [2024-11-26 15:32:23.700050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:25.308 [2024-11-26 15:32:23.700089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:25.308 [2024-11-26 15:32:23.700192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.308 pt1 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.308 "name": "raid_bdev1", 00:16:25.308 "uuid": "a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1", 00:16:25.308 "strip_size_kb": 0, 00:16:25.308 "state": "online", 00:16:25.308 "raid_level": "raid1", 00:16:25.308 "superblock": true, 00:16:25.308 "num_base_bdevs": 2, 00:16:25.308 "num_base_bdevs_discovered": 1, 00:16:25.308 "num_base_bdevs_operational": 1, 00:16:25.308 "base_bdevs_list": [ 00:16:25.308 { 00:16:25.308 "name": null, 00:16:25.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.308 "is_configured": false, 00:16:25.308 "data_offset": 256, 00:16:25.308 "data_size": 7936 00:16:25.308 }, 00:16:25.308 { 00:16:25.308 "name": "pt2", 00:16:25.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.308 "is_configured": true, 00:16:25.308 "data_offset": 256, 00:16:25.308 "data_size": 7936 00:16:25.308 } 00:16:25.308 ] 00:16:25.308 }' 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.308 15:32:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.923 [2024-11-26 15:32:24.229414] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1 '!=' a2f5cc73-0c8e-415c-a9a7-7a8d3d11dbc1 ']' 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 100533 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100533 ']' 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100533 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100533 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.923 killing process with pid 100533 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100533' 00:16:25.923 
15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 100533 00:16:25.923 [2024-11-26 15:32:24.302699] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.923 [2024-11-26 15:32:24.302775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.923 [2024-11-26 15:32:24.302816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.923 [2024-11-26 15:32:24.302828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:25.923 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 100533 00:16:25.923 [2024-11-26 15:32:24.346820] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.493 ************************************ 00:16:26.493 END TEST raid_superblock_test_md_interleaved 00:16:26.493 ************************************ 00:16:26.493 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:26.493 00:16:26.493 real 0m4.888s 00:16:26.493 user 0m7.763s 00:16:26.493 sys 0m1.146s 00:16:26.493 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.494 15:32:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.494 15:32:24 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:26.494 15:32:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:26.494 15:32:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.494 15:32:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.494 ************************************ 00:16:26.494 START TEST raid_rebuild_test_sb_md_interleaved 00:16:26.494 
************************************ 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:26.494 15:32:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=100850 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 100850 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100850 ']' 00:16:26.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.494 15:32:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.494 [2024-11-26 15:32:24.865826] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:26.494 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:26.494 Zero copy mechanism will not be used. 00:16:26.494 [2024-11-26 15:32:24.866004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100850 ] 00:16:26.753 [2024-11-26 15:32:25.006622] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:26.753 [2024-11-26 15:32:25.046510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.753 [2024-11-26 15:32:25.088416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.753 [2024-11-26 15:32:25.165203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.753 [2024-11-26 15:32:25.165261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.322 BaseBdev1_malloc 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.322 [2024-11-26 15:32:25.705273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:27.322 [2024-11-26 15:32:25.705349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.322 
[2024-11-26 15:32:25.705385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:27.322 [2024-11-26 15:32:25.705403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.322 [2024-11-26 15:32:25.707712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.322 [2024-11-26 15:32:25.707815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:27.322 BaseBdev1 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.322 BaseBdev2_malloc 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.322 [2024-11-26 15:32:25.736357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:27.322 [2024-11-26 15:32:25.736413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.322 [2024-11-26 15:32:25.736438] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:27.322 [2024-11-26 15:32:25.736449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.322 [2024-11-26 15:32:25.738643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.322 [2024-11-26 15:32:25.738679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:27.322 BaseBdev2 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.322 spare_malloc 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.322 spare_delay 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:27.322 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.322 15:32:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.322 [2024-11-26 15:32:25.779361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:27.322 [2024-11-26 15:32:25.779415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.322 [2024-11-26 15:32:25.779434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:27.322 [2024-11-26 15:32:25.779447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.322 [2024-11-26 15:32:25.781671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.322 [2024-11-26 15:32:25.781768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:27.322 spare 00:16:27.323 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.323 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:27.323 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.323 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.323 [2024-11-26 15:32:25.791423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.323 [2024-11-26 15:32:25.793634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.323 [2024-11-26 15:32:25.793798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:27.323 [2024-11-26 15:32:25.793813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:27.323 [2024-11-26 15:32:25.793904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:16:27.323 [2024-11-26 15:32:25.793991] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:27.323 [2024-11-26 15:32:25.794001] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:27.323 [2024-11-26 15:32:25.794066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.583 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.584 "name": "raid_bdev1", 00:16:27.584 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:27.584 "strip_size_kb": 0, 00:16:27.584 "state": "online", 00:16:27.584 "raid_level": "raid1", 00:16:27.584 "superblock": true, 00:16:27.584 "num_base_bdevs": 2, 00:16:27.584 "num_base_bdevs_discovered": 2, 00:16:27.584 "num_base_bdevs_operational": 2, 00:16:27.584 "base_bdevs_list": [ 00:16:27.584 { 00:16:27.584 "name": "BaseBdev1", 00:16:27.584 "uuid": "fbf8173f-d569-5b37-aa9c-50e7171f9293", 00:16:27.584 "is_configured": true, 00:16:27.584 "data_offset": 256, 00:16:27.584 "data_size": 7936 00:16:27.584 }, 00:16:27.584 { 00:16:27.584 "name": "BaseBdev2", 00:16:27.584 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:27.584 "is_configured": true, 00:16:27.584 "data_offset": 256, 00:16:27.584 "data_size": 7936 00:16:27.584 } 00:16:27.584 ] 00:16:27.584 }' 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.584 15:32:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.844 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.844 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.844 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.844 15:32:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:27.844 [2024-11-26 15:32:26.287817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.844 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.103 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:28.103 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.104 [2024-11-26 15:32:26.383545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved --
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:28.104 "name": "raid_bdev1",
00:16:28.104 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4",
00:16:28.104 "strip_size_kb": 0,
00:16:28.104 "state": "online",
00:16:28.104 "raid_level": "raid1",
00:16:28.104 "superblock": true,
00:16:28.104 "num_base_bdevs": 2,
00:16:28.104 "num_base_bdevs_discovered": 1,
00:16:28.104 "num_base_bdevs_operational": 1,
00:16:28.104 "base_bdevs_list": [
00:16:28.104 {
00:16:28.104 "name": null,
00:16:28.104 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:28.104 "is_configured": false,
00:16:28.104 "data_offset": 0,
00:16:28.104 "data_size": 7936
00:16:28.104 },
00:16:28.104 {
00:16:28.104 "name": "BaseBdev2",
00:16:28.104 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47",
00:16:28.104 "is_configured": true,
00:16:28.104 "data_offset": 256,
00:16:28.104 "data_size": 7936
00:16:28.104 }
00:16:28.104 ]
00:16:28.104 }'
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:28.104 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.364 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:28.364 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.364 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:28.364 [2024-11-26 15:32:26.791677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:28.364 [2024-11-26 15:32:26.809827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:16:28.364 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.364 15:32:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1
00:16:28.364
[2024-11-26 15:32:26.815674] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.744 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.745 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:29.745 "name": "raid_bdev1",
00:16:29.745 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4",
00:16:29.745 "strip_size_kb": 0,
00:16:29.745 "state": "online",
00:16:29.745 "raid_level": "raid1",
00:16:29.745 "superblock": true,
00:16:29.745 "num_base_bdevs": 2,
00:16:29.745 "num_base_bdevs_discovered": 2,
00:16:29.745 "num_base_bdevs_operational": 2,
00:16:29.745 "process": {
00:16:29.745 "type": "rebuild",
00:16:29.745 "target": "spare",
00:16:29.745 "progress": {
00:16:29.745
"blocks": 2560, 00:16:29.745 "percent": 32 00:16:29.745 } 00:16:29.745 }, 00:16:29.745 "base_bdevs_list": [ 00:16:29.745 { 00:16:29.745 "name": "spare", 00:16:29.745 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df", 00:16:29.745 "is_configured": true, 00:16:29.745 "data_offset": 256, 00:16:29.745 "data_size": 7936 00:16:29.745 }, 00:16:29.745 { 00:16:29.745 "name": "BaseBdev2", 00:16:29.745 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:29.745 "is_configured": true, 00:16:29.745 "data_offset": 256, 00:16:29.745 "data_size": 7936 00:16:29.745 } 00:16:29.745 ] 00:16:29.745 }' 00:16:29.745 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.745 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.745 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.745 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.745 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.745 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.745 15:32:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.745 [2024-11-26 15:32:27.970271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.745 [2024-11-26 15:32:28.027009] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.745 [2024-11-26 15:32:28.027069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.745 [2024-11-26 15:32:28.027083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.745 [2024-11-26 15:32:28.027109] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:29.745 "name": "raid_bdev1",
00:16:29.745 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4",
00:16:29.745 "strip_size_kb": 0,
00:16:29.745 "state": "online",
00:16:29.745 "raid_level": "raid1",
00:16:29.745 "superblock": true,
00:16:29.745 "num_base_bdevs": 2,
00:16:29.745 "num_base_bdevs_discovered": 1,
00:16:29.745 "num_base_bdevs_operational": 1,
00:16:29.745 "base_bdevs_list": [
00:16:29.745 {
00:16:29.745 "name": null,
00:16:29.745 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:29.745 "is_configured": false,
00:16:29.745 "data_offset": 0,
00:16:29.745 "data_size": 7936
00:16:29.745 },
00:16:29.745 {
00:16:29.745 "name": "BaseBdev2",
00:16:29.745 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47",
00:16:29.745 "is_configured": true,
00:16:29.745 "data_offset": 256,
00:16:29.745 "data_size": 7936
00:16:29.745 }
00:16:29.745 ]
00:16:29.745 }'
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:29.745 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:30.314 15:32:28
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.314 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:30.314 "name": "raid_bdev1",
00:16:30.314 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4",
00:16:30.314 "strip_size_kb": 0,
00:16:30.314 "state": "online",
00:16:30.314 "raid_level": "raid1",
00:16:30.314 "superblock": true,
00:16:30.314 "num_base_bdevs": 2,
00:16:30.314 "num_base_bdevs_discovered": 1,
00:16:30.314 "num_base_bdevs_operational": 1,
00:16:30.314 "base_bdevs_list": [
00:16:30.314 {
00:16:30.314 "name": null,
00:16:30.314 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:30.314 "is_configured": false,
00:16:30.314 "data_offset": 0,
00:16:30.315 "data_size": 7936
00:16:30.315 },
00:16:30.315 {
00:16:30.315 "name": "BaseBdev2",
00:16:30.315 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47",
00:16:30.315 "is_configured": true,
00:16:30.315 "data_offset": 256,
00:16:30.315 "data_size": 7936
00:16:30.315 }
00:16:30.315 ]
00:16:30.315 }'
00:16:30.315 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:30.315 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:30.315 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:30.315 15:32:28
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:30.315 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:30.315 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:30.315 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:30.315 [2024-11-26 15:32:28.649825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:30.315 [2024-11-26 15:32:28.654955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:16:30.315 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:30.315 15:32:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1
00:16:30.315 [2024-11-26 15:32:28.657132] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name ==
"raid_bdev1")' 00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.258 "name": "raid_bdev1", 00:16:31.258 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:31.258 "strip_size_kb": 0, 00:16:31.258 "state": "online", 00:16:31.258 "raid_level": "raid1", 00:16:31.258 "superblock": true, 00:16:31.258 "num_base_bdevs": 2, 00:16:31.258 "num_base_bdevs_discovered": 2, 00:16:31.258 "num_base_bdevs_operational": 2, 00:16:31.258 "process": { 00:16:31.258 "type": "rebuild", 00:16:31.258 "target": "spare", 00:16:31.258 "progress": { 00:16:31.258 "blocks": 2560, 00:16:31.258 "percent": 32 00:16:31.258 } 00:16:31.258 }, 00:16:31.258 "base_bdevs_list": [ 00:16:31.258 { 00:16:31.258 "name": "spare", 00:16:31.258 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df", 00:16:31.258 "is_configured": true, 00:16:31.258 "data_offset": 256, 00:16:31.258 "data_size": 7936 00:16:31.258 }, 00:16:31.258 { 00:16:31.258 "name": "BaseBdev2", 00:16:31.258 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:31.258 "is_configured": true, 00:16:31.258 "data_offset": 256, 00:16:31.258 "data_size": 7936 00:16:31.258 } 00:16:31.258 ] 00:16:31.258 }' 00:16:31.258 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.518 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.518 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.518 15:32:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:31.518 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=608
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:31.519 15:32:29
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:31.519 "name": "raid_bdev1",
00:16:31.519 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4",
00:16:31.519 "strip_size_kb": 0,
00:16:31.519 "state": "online",
00:16:31.519 "raid_level": "raid1",
00:16:31.519 "superblock": true,
00:16:31.519 "num_base_bdevs": 2,
00:16:31.519 "num_base_bdevs_discovered": 2,
00:16:31.519 "num_base_bdevs_operational": 2,
00:16:31.519 "process": {
00:16:31.519 "type": "rebuild",
00:16:31.519 "target": "spare",
00:16:31.519 "progress": {
00:16:31.519 "blocks": 2816,
00:16:31.519 "percent": 35
00:16:31.519 }
00:16:31.519 },
00:16:31.519 "base_bdevs_list": [
00:16:31.519 {
00:16:31.519 "name": "spare",
00:16:31.519 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df",
00:16:31.519 "is_configured": true,
00:16:31.519 "data_offset": 256,
00:16:31.519 "data_size": 7936
00:16:31.519 },
00:16:31.519 {
00:16:31.519 "name": "BaseBdev2",
00:16:31.519 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47",
00:16:31.519 "is_configured": true,
00:16:31.519 "data_offset": 256,
00:16:31.519 "data_size": 7936
00:16:31.519 }
00:16:31.519 ]
00:16:31.519 }'
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved --
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:31.519 15:32:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:32.901 "name": "raid_bdev1",
00:16:32.901 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4",
00:16:32.901 "strip_size_kb": 0,
00:16:32.901 "state": "online",
00:16:32.901 "raid_level": "raid1",
00:16:32.901 "superblock": true,
00:16:32.901 "num_base_bdevs": 2,
00:16:32.901 "num_base_bdevs_discovered": 2,
00:16:32.901
"num_base_bdevs_operational": 2, 00:16:32.901 "process": { 00:16:32.901 "type": "rebuild", 00:16:32.901 "target": "spare", 00:16:32.901 "progress": { 00:16:32.901 "blocks": 5632, 00:16:32.901 "percent": 70 00:16:32.901 } 00:16:32.901 }, 00:16:32.901 "base_bdevs_list": [ 00:16:32.901 { 00:16:32.901 "name": "spare", 00:16:32.901 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df", 00:16:32.901 "is_configured": true, 00:16:32.901 "data_offset": 256, 00:16:32.901 "data_size": 7936 00:16:32.901 }, 00:16:32.901 { 00:16:32.901 "name": "BaseBdev2", 00:16:32.901 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:32.901 "is_configured": true, 00:16:32.901 "data_offset": 256, 00:16:32.901 "data_size": 7936 00:16:32.901 } 00:16:32.901 ] 00:16:32.901 }' 00:16:32.901 15:32:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.901 15:32:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.901 15:32:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.901 15:32:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.901 15:32:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.471 [2024-11-26 15:32:31.782524] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:33.471 [2024-11-26 15:32:31.782614] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:33.471 [2024-11-26 15:32:31.782718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:33.732 "name": "raid_bdev1",
00:16:33.732 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4",
00:16:33.732 "strip_size_kb": 0,
00:16:33.732 "state": "online",
00:16:33.732 "raid_level": "raid1",
00:16:33.732 "superblock": true,
00:16:33.732 "num_base_bdevs": 2,
00:16:33.732 "num_base_bdevs_discovered": 2,
00:16:33.732 "num_base_bdevs_operational": 2,
00:16:33.732 "base_bdevs_list": [
00:16:33.732 {
00:16:33.732 "name": "spare",
00:16:33.732 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df",
00:16:33.732 "is_configured": true,
00:16:33.732 "data_offset": 256,
00:16:33.732 "data_size": 7936
00:16:33.732 },
00:16:33.732 {
00:16:33.732 "name": "BaseBdev2",
00:16:33.732 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47",
00:16:33.732
"is_configured": true, 00:16:33.732 "data_offset": 256, 00:16:33.732 "data_size": 7936 00:16:33.732 } 00:16:33.732 ] 00:16:33.732 }' 00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:33.732 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]]
00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:33.992 "name": "raid_bdev1",
00:16:33.992 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4",
00:16:33.992 "strip_size_kb": 0,
00:16:33.992 "state": "online",
00:16:33.992 "raid_level": "raid1",
00:16:33.992 "superblock": true,
00:16:33.992 "num_base_bdevs": 2,
00:16:33.992 "num_base_bdevs_discovered": 2,
00:16:33.992 "num_base_bdevs_operational": 2,
00:16:33.992 "base_bdevs_list": [
00:16:33.992 {
00:16:33.992 "name": "spare",
00:16:33.992 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df",
00:16:33.992 "is_configured": true,
00:16:33.992 "data_offset": 256,
00:16:33.992 "data_size": 7936
00:16:33.992 },
00:16:33.992 {
00:16:33.992 "name": "BaseBdev2",
00:16:33.992 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47",
00:16:33.992 "is_configured": true,
00:16:33.992 "data_offset": 256,
00:16:33.992 "data_size": 7936
00:16:33.992 }
00:16:33.992 ]
00:16:33.992 }'
00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:33.992 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- #
local raid_level=raid1
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:33.993 "name": "raid_bdev1",
00:16:33.993 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4",
00:16:33.993 "strip_size_kb": 0,
00:16:33.993 "state": "online",
00:16:33.993 "raid_level": "raid1",
00:16:33.993 "superblock": true,
00:16:33.993 "num_base_bdevs": 2,
00:16:33.993 "num_base_bdevs_discovered": 2,
00:16:33.993 "num_base_bdevs_operational": 2,
00:16:33.993 "base_bdevs_list": [
00:16:33.993 {
00:16:33.993 "name": "spare",
00:16:33.993 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df",
00:16:33.993
"is_configured": true, 00:16:33.993 "data_offset": 256, 00:16:33.993 "data_size": 7936 00:16:33.993 }, 00:16:33.993 { 00:16:33.993 "name": "BaseBdev2", 00:16:33.993 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:33.993 "is_configured": true, 00:16:33.993 "data_offset": 256, 00:16:33.993 "data_size": 7936 00:16:33.993 } 00:16:33.993 ] 00:16:33.993 }' 00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.993 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.563 [2024-11-26 15:32:32.804445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.563 [2024-11-26 15:32:32.804479] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.563 [2024-11-26 15:32:32.804596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.563 [2024-11-26 15:32:32.804669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.563 [2024-11-26 15:32:32.804680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.563 [2024-11-26 15:32:32.876463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.563 [2024-11-26 15:32:32.876585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.563 [2024-11-26 15:32:32.876627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:34.563 [2024-11-26 15:32:32.876655] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.563 [2024-11-26 15:32:32.878793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.563 [2024-11-26 15:32:32.878865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.563 [2024-11-26 15:32:32.878935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:34.563 [2024-11-26 15:32:32.879010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.563 [2024-11-26 15:32:32.879136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.563 spare 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.563 [2024-11-26 15:32:32.979250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:34.563 [2024-11-26 15:32:32.979318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:34.563 [2024-11-26 15:32:32.979433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:16:34.563 [2024-11-26 15:32:32.979517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:34.563 [2024-11-26 15:32:32.979526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:34.563 [2024-11-26 15:32:32.979598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.563 15:32:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.563 15:32:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.564 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.564 15:32:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.564 "name": "raid_bdev1", 00:16:34.564 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:34.564 "strip_size_kb": 0, 00:16:34.564 "state": "online", 00:16:34.564 "raid_level": "raid1", 00:16:34.564 "superblock": true, 00:16:34.564 "num_base_bdevs": 2, 00:16:34.564 "num_base_bdevs_discovered": 2, 00:16:34.564 "num_base_bdevs_operational": 2, 00:16:34.564 "base_bdevs_list": [ 00:16:34.564 { 00:16:34.564 "name": "spare", 00:16:34.564 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df", 00:16:34.564 "is_configured": true, 00:16:34.564 "data_offset": 256, 00:16:34.564 "data_size": 7936 00:16:34.564 }, 00:16:34.564 { 00:16:34.564 "name": "BaseBdev2", 00:16:34.564 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:34.564 "is_configured": true, 00:16:34.564 "data_offset": 256, 00:16:34.564 "data_size": 7936 00:16:34.564 } 00:16:34.564 ] 00:16:34.564 }' 00:16:34.564 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.564 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.135 15:32:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.135 "name": "raid_bdev1", 00:16:35.135 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:35.135 "strip_size_kb": 0, 00:16:35.135 "state": "online", 00:16:35.135 "raid_level": "raid1", 00:16:35.135 "superblock": true, 00:16:35.135 "num_base_bdevs": 2, 00:16:35.135 "num_base_bdevs_discovered": 2, 00:16:35.135 "num_base_bdevs_operational": 2, 00:16:35.135 "base_bdevs_list": [ 00:16:35.135 { 00:16:35.135 "name": "spare", 00:16:35.135 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df", 00:16:35.135 "is_configured": true, 00:16:35.135 "data_offset": 256, 00:16:35.135 "data_size": 7936 00:16:35.135 }, 00:16:35.135 { 00:16:35.135 "name": "BaseBdev2", 00:16:35.135 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:35.135 "is_configured": true, 00:16:35.135 "data_offset": 256, 00:16:35.135 "data_size": 7936 00:16:35.135 } 00:16:35.135 ] 00:16:35.135 }' 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:35.135 15:32:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.135 [2024-11-26 15:32:33.584703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.135 15:32:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.135 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.395 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.395 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.395 "name": "raid_bdev1", 00:16:35.395 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:35.395 "strip_size_kb": 0, 00:16:35.395 "state": "online", 00:16:35.395 "raid_level": "raid1", 00:16:35.395 "superblock": true, 00:16:35.395 "num_base_bdevs": 2, 00:16:35.395 "num_base_bdevs_discovered": 1, 00:16:35.395 "num_base_bdevs_operational": 1, 00:16:35.395 "base_bdevs_list": [ 00:16:35.395 { 00:16:35.395 "name": null, 00:16:35.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.395 "is_configured": false, 00:16:35.395 "data_offset": 0, 00:16:35.395 "data_size": 7936 00:16:35.396 }, 00:16:35.396 { 00:16:35.396 "name": "BaseBdev2", 00:16:35.396 
"uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:35.396 "is_configured": true, 00:16:35.396 "data_offset": 256, 00:16:35.396 "data_size": 7936 00:16:35.396 } 00:16:35.396 ] 00:16:35.396 }' 00:16:35.396 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.396 15:32:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.656 15:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:35.656 15:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.656 15:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.656 [2024-11-26 15:32:34.032862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.656 [2024-11-26 15:32:34.033062] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.656 [2024-11-26 15:32:34.033145] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:35.656 [2024-11-26 15:32:34.033218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.656 [2024-11-26 15:32:34.039568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:35.656 15:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.656 15:32:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:35.656 [2024-11-26 15:32:34.041788] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.596 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:36.856 "name": "raid_bdev1", 00:16:36.856 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:36.856 "strip_size_kb": 0, 00:16:36.856 "state": "online", 00:16:36.856 "raid_level": "raid1", 00:16:36.856 "superblock": true, 00:16:36.856 "num_base_bdevs": 2, 00:16:36.856 "num_base_bdevs_discovered": 2, 00:16:36.856 "num_base_bdevs_operational": 2, 00:16:36.856 "process": { 00:16:36.856 "type": "rebuild", 00:16:36.856 "target": "spare", 00:16:36.856 "progress": { 00:16:36.856 "blocks": 2560, 00:16:36.856 "percent": 32 00:16:36.856 } 00:16:36.856 }, 00:16:36.856 "base_bdevs_list": [ 00:16:36.856 { 00:16:36.856 "name": "spare", 00:16:36.856 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df", 00:16:36.856 "is_configured": true, 00:16:36.856 "data_offset": 256, 00:16:36.856 "data_size": 7936 00:16:36.856 }, 00:16:36.856 { 00:16:36.856 "name": "BaseBdev2", 00:16:36.856 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:36.856 "is_configured": true, 00:16:36.856 "data_offset": 256, 00:16:36.856 "data_size": 7936 00:16:36.856 } 00:16:36.856 ] 00:16:36.856 }' 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.856 [2024-11-26 15:32:35.202263] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.856 [2024-11-26 15:32:35.251697] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:36.856 [2024-11-26 15:32:35.251804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.856 [2024-11-26 15:32:35.251820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.856 [2024-11-26 15:32:35.251830] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.856 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.857 15:32:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.857 "name": "raid_bdev1", 00:16:36.857 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:36.857 "strip_size_kb": 0, 00:16:36.857 "state": "online", 00:16:36.857 "raid_level": "raid1", 00:16:36.857 "superblock": true, 00:16:36.857 "num_base_bdevs": 2, 00:16:36.857 "num_base_bdevs_discovered": 1, 00:16:36.857 "num_base_bdevs_operational": 1, 00:16:36.857 "base_bdevs_list": [ 00:16:36.857 { 00:16:36.857 "name": null, 00:16:36.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.857 "is_configured": false, 00:16:36.857 "data_offset": 0, 00:16:36.857 "data_size": 7936 00:16:36.857 }, 00:16:36.857 { 00:16:36.857 "name": "BaseBdev2", 00:16:36.857 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:36.857 "is_configured": true, 00:16:36.857 "data_offset": 256, 00:16:36.857 "data_size": 7936 00:16:36.857 } 00:16:36.857 ] 00:16:36.857 }' 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.857 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.427 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:37.427 15:32:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.427 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.427 [2024-11-26 15:32:35.742192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:37.427 [2024-11-26 15:32:35.742302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.427 [2024-11-26 15:32:35.742345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:37.427 [2024-11-26 15:32:35.742375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.427 [2024-11-26 15:32:35.742592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.427 [2024-11-26 15:32:35.742646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:37.427 [2024-11-26 15:32:35.742724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:37.427 [2024-11-26 15:32:35.742764] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.427 [2024-11-26 15:32:35.742823] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:37.427 [2024-11-26 15:32:35.742888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.427 [2024-11-26 15:32:35.747757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:16:37.427 spare 00:16:37.427 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.427 15:32:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:37.427 [2024-11-26 15:32:35.749975] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.366 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:38.367 "name": "raid_bdev1", 00:16:38.367 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:38.367 "strip_size_kb": 0, 00:16:38.367 "state": "online", 00:16:38.367 "raid_level": "raid1", 00:16:38.367 "superblock": true, 00:16:38.367 "num_base_bdevs": 2, 00:16:38.367 "num_base_bdevs_discovered": 2, 00:16:38.367 "num_base_bdevs_operational": 2, 00:16:38.367 "process": { 00:16:38.367 "type": "rebuild", 00:16:38.367 "target": "spare", 00:16:38.367 "progress": { 00:16:38.367 "blocks": 2560, 00:16:38.367 "percent": 32 00:16:38.367 } 00:16:38.367 }, 00:16:38.367 "base_bdevs_list": [ 00:16:38.367 { 00:16:38.367 "name": "spare", 00:16:38.367 "uuid": "19e13ec4-fc24-5ef0-817a-8a5d90c840df", 00:16:38.367 "is_configured": true, 00:16:38.367 "data_offset": 256, 00:16:38.367 "data_size": 7936 00:16:38.367 }, 00:16:38.367 { 00:16:38.367 "name": "BaseBdev2", 00:16:38.367 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:38.367 "is_configured": true, 00:16:38.367 "data_offset": 256, 00:16:38.367 "data_size": 7936 00:16:38.367 } 00:16:38.367 ] 00:16:38.367 }' 00:16:38.367 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.628 [2024-11-26 
15:32:36.903873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.628 [2024-11-26 15:32:36.959806] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.628 [2024-11-26 15:32:36.959907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.628 [2024-11-26 15:32:36.959944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.628 [2024-11-26 15:32:36.959964] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.628 15:32:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.628 15:32:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.628 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.628 "name": "raid_bdev1", 00:16:38.628 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:38.628 "strip_size_kb": 0, 00:16:38.628 "state": "online", 00:16:38.628 "raid_level": "raid1", 00:16:38.628 "superblock": true, 00:16:38.628 "num_base_bdevs": 2, 00:16:38.628 "num_base_bdevs_discovered": 1, 00:16:38.628 "num_base_bdevs_operational": 1, 00:16:38.628 "base_bdevs_list": [ 00:16:38.628 { 00:16:38.628 "name": null, 00:16:38.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.628 "is_configured": false, 00:16:38.628 "data_offset": 0, 00:16:38.628 "data_size": 7936 00:16:38.628 }, 00:16:38.628 { 00:16:38.628 "name": "BaseBdev2", 00:16:38.628 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:38.628 "is_configured": true, 00:16:38.628 "data_offset": 256, 00:16:38.628 "data_size": 7936 00:16:38.628 } 00:16:38.628 ] 00:16:38.628 }' 00:16:38.628 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.628 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.197 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.198 15:32:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.198 "name": "raid_bdev1", 00:16:39.198 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:39.198 "strip_size_kb": 0, 00:16:39.198 "state": "online", 00:16:39.198 "raid_level": "raid1", 00:16:39.198 "superblock": true, 00:16:39.198 "num_base_bdevs": 2, 00:16:39.198 "num_base_bdevs_discovered": 1, 00:16:39.198 "num_base_bdevs_operational": 1, 00:16:39.198 "base_bdevs_list": [ 00:16:39.198 { 00:16:39.198 "name": null, 00:16:39.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.198 "is_configured": false, 00:16:39.198 "data_offset": 0, 00:16:39.198 "data_size": 7936 00:16:39.198 }, 00:16:39.198 { 00:16:39.198 "name": "BaseBdev2", 00:16:39.198 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:39.198 "is_configured": true, 00:16:39.198 "data_offset": 256, 
00:16:39.198 "data_size": 7936 00:16:39.198 } 00:16:39.198 ] 00:16:39.198 }' 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.198 [2024-11-26 15:32:37.610335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:39.198 [2024-11-26 15:32:37.610392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.198 [2024-11-26 15:32:37.610415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:39.198 [2024-11-26 15:32:37.610425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.198 [2024-11-26 15:32:37.610601] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.198 [2024-11-26 15:32:37.610613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:39.198 [2024-11-26 15:32:37.610659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:39.198 [2024-11-26 15:32:37.610672] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.198 [2024-11-26 15:32:37.610686] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:39.198 [2024-11-26 15:32:37.610695] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:39.198 BaseBdev1 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.198 15:32:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.579 15:32:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.579 "name": "raid_bdev1", 00:16:40.579 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:40.579 "strip_size_kb": 0, 00:16:40.579 "state": "online", 00:16:40.579 "raid_level": "raid1", 00:16:40.579 "superblock": true, 00:16:40.579 "num_base_bdevs": 2, 00:16:40.579 "num_base_bdevs_discovered": 1, 00:16:40.579 "num_base_bdevs_operational": 1, 00:16:40.579 "base_bdevs_list": [ 00:16:40.579 { 00:16:40.579 "name": null, 00:16:40.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.579 "is_configured": false, 00:16:40.579 "data_offset": 0, 00:16:40.579 "data_size": 7936 00:16:40.579 }, 00:16:40.579 { 00:16:40.579 "name": "BaseBdev2", 00:16:40.579 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:40.579 "is_configured": true, 00:16:40.579 "data_offset": 256, 00:16:40.579 "data_size": 7936 00:16:40.579 } 00:16:40.579 ] 00:16:40.579 }' 00:16:40.579 15:32:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.579 15:32:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.579 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.579 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.579 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.579 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.579 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.839 "name": "raid_bdev1", 00:16:40.839 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:40.839 "strip_size_kb": 0, 00:16:40.839 "state": "online", 00:16:40.839 "raid_level": "raid1", 00:16:40.839 "superblock": true, 00:16:40.839 "num_base_bdevs": 2, 00:16:40.839 "num_base_bdevs_discovered": 1, 00:16:40.839 "num_base_bdevs_operational": 1, 00:16:40.839 "base_bdevs_list": [ 00:16:40.839 { 00:16:40.839 "name": 
null, 00:16:40.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.839 "is_configured": false, 00:16:40.839 "data_offset": 0, 00:16:40.839 "data_size": 7936 00:16:40.839 }, 00:16:40.839 { 00:16:40.839 "name": "BaseBdev2", 00:16:40.839 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:40.839 "is_configured": true, 00:16:40.839 "data_offset": 256, 00:16:40.839 "data_size": 7936 00:16:40.839 } 00:16:40.839 ] 00:16:40.839 }' 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.839 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.839 [2024-11-26 15:32:39.214738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.839 [2024-11-26 15:32:39.214870] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:40.840 [2024-11-26 15:32:39.214888] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:40.840 request: 00:16:40.840 { 00:16:40.840 "base_bdev": "BaseBdev1", 00:16:40.840 "raid_bdev": "raid_bdev1", 00:16:40.840 "method": "bdev_raid_add_base_bdev", 00:16:40.840 "req_id": 1 00:16:40.840 } 00:16:40.840 Got JSON-RPC error response 00:16:40.840 response: 00:16:40.840 { 00:16:40.840 "code": -22, 00:16:40.840 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:40.840 } 00:16:40.840 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:40.840 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:16:40.840 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:40.840 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:40.840 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:40.840 15:32:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.780 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.040 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.040 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.040 "name": "raid_bdev1", 00:16:42.040 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:42.040 "strip_size_kb": 0, 
00:16:42.040 "state": "online", 00:16:42.040 "raid_level": "raid1", 00:16:42.040 "superblock": true, 00:16:42.040 "num_base_bdevs": 2, 00:16:42.040 "num_base_bdevs_discovered": 1, 00:16:42.040 "num_base_bdevs_operational": 1, 00:16:42.040 "base_bdevs_list": [ 00:16:42.040 { 00:16:42.040 "name": null, 00:16:42.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.040 "is_configured": false, 00:16:42.040 "data_offset": 0, 00:16:42.040 "data_size": 7936 00:16:42.040 }, 00:16:42.040 { 00:16:42.040 "name": "BaseBdev2", 00:16:42.040 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:42.040 "is_configured": true, 00:16:42.040 "data_offset": 256, 00:16:42.040 "data_size": 7936 00:16:42.040 } 00:16:42.040 ] 00:16:42.040 }' 00:16:42.040 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.040 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.300 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.300 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.301 
15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.301 "name": "raid_bdev1", 00:16:42.301 "uuid": "02788180-cf2d-4bd8-863f-92f7c4a316c4", 00:16:42.301 "strip_size_kb": 0, 00:16:42.301 "state": "online", 00:16:42.301 "raid_level": "raid1", 00:16:42.301 "superblock": true, 00:16:42.301 "num_base_bdevs": 2, 00:16:42.301 "num_base_bdevs_discovered": 1, 00:16:42.301 "num_base_bdevs_operational": 1, 00:16:42.301 "base_bdevs_list": [ 00:16:42.301 { 00:16:42.301 "name": null, 00:16:42.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.301 "is_configured": false, 00:16:42.301 "data_offset": 0, 00:16:42.301 "data_size": 7936 00:16:42.301 }, 00:16:42.301 { 00:16:42.301 "name": "BaseBdev2", 00:16:42.301 "uuid": "6ad08e94-9dff-55ed-aee1-503407ecff47", 00:16:42.301 "is_configured": true, 00:16:42.301 "data_offset": 256, 00:16:42.301 "data_size": 7936 00:16:42.301 } 00:16:42.301 ] 00:16:42.301 }' 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.301 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 100850 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100850 ']' 00:16:42.561 15:32:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100850 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100850 00:16:42.561 killing process with pid 100850 00:16:42.561 Received shutdown signal, test time was about 60.000000 seconds 00:16:42.561 00:16:42.561 Latency(us) 00:16:42.561 [2024-11-26T15:32:41.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.561 [2024-11-26T15:32:41.040Z] =================================================================================================================== 00:16:42.561 [2024-11-26T15:32:41.040Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100850' 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 100850 00:16:42.561 [2024-11-26 15:32:40.817647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.561 [2024-11-26 15:32:40.817767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.561 15:32:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 100850 00:16:42.561 [2024-11-26 15:32:40.817814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:16:42.561 [2024-11-26 15:32:40.817827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:42.561 [2024-11-26 15:32:40.880192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.822 15:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:42.822 ************************************ 00:16:42.822 END TEST raid_rebuild_test_sb_md_interleaved 00:16:42.822 ************************************ 00:16:42.822 00:16:42.822 real 0m16.432s 00:16:42.822 user 0m21.821s 00:16:42.822 sys 0m1.834s 00:16:42.822 15:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.822 15:32:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.822 15:32:41 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:42.822 15:32:41 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:42.822 15:32:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 100850 ']' 00:16:42.822 15:32:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 100850 00:16:42.822 15:32:41 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:43.082 00:16:43.082 real 9m50.271s 00:16:43.082 user 13m57.737s 00:16:43.082 sys 1m47.041s 00:16:43.082 ************************************ 00:16:43.082 END TEST bdev_raid 00:16:43.082 ************************************ 00:16:43.082 15:32:41 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.082 15:32:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.082 15:32:41 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:43.082 15:32:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:43.082 15:32:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.082 15:32:41 -- common/autotest_common.sh@10 -- # set +x 00:16:43.082 
************************************ 00:16:43.082 START TEST spdkcli_raid 00:16:43.082 ************************************ 00:16:43.082 15:32:41 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:43.082 * Looking for test storage... 00:16:43.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:43.082 15:32:41 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:43.082 15:32:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:43.082 15:32:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:43.343 15:32:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:43.343 15:32:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.344 15:32:41 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:43.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.344 --rc genhtml_branch_coverage=1 00:16:43.344 --rc genhtml_function_coverage=1 00:16:43.344 --rc genhtml_legend=1 00:16:43.344 --rc geninfo_all_blocks=1 00:16:43.344 --rc geninfo_unexecuted_blocks=1 00:16:43.344 00:16:43.344 ' 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:43.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.344 --rc genhtml_branch_coverage=1 00:16:43.344 --rc genhtml_function_coverage=1 00:16:43.344 --rc genhtml_legend=1 00:16:43.344 --rc geninfo_all_blocks=1 00:16:43.344 --rc geninfo_unexecuted_blocks=1 00:16:43.344 00:16:43.344 ' 00:16:43.344 
15:32:41 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:43.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.344 --rc genhtml_branch_coverage=1 00:16:43.344 --rc genhtml_function_coverage=1 00:16:43.344 --rc genhtml_legend=1 00:16:43.344 --rc geninfo_all_blocks=1 00:16:43.344 --rc geninfo_unexecuted_blocks=1 00:16:43.344 00:16:43.344 ' 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:43.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.344 --rc genhtml_branch_coverage=1 00:16:43.344 --rc genhtml_function_coverage=1 00:16:43.344 --rc genhtml_legend=1 00:16:43.344 --rc geninfo_all_blocks=1 00:16:43.344 --rc geninfo_unexecuted_blocks=1 00:16:43.344 00:16:43.344 ' 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:43.344 15:32:41 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=101521 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:43.344 15:32:41 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 101521 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 101521 ']' 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.344 15:32:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.344 [2024-11-26 15:32:41.732616] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:16:43.344 [2024-11-26 15:32:41.732807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101521 ] 00:16:43.605 [2024-11-26 15:32:41.874389] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:43.605 [2024-11-26 15:32:41.913261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:43.605 [2024-11-26 15:32:41.955653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.605 [2024-11-26 15:32:41.955738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.176 15:32:42 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.176 15:32:42 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:16:44.176 15:32:42 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:44.176 15:32:42 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:44.176 15:32:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.176 15:32:42 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:44.176 15:32:42 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:44.176 15:32:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.176 15:32:42 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:44.176 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:44.176 ' 00:16:46.082 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:46.082 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:46.082 15:32:44 spdkcli_raid -- 
spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:46.082 15:32:44 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.082 15:32:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.082 15:32:44 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:46.082 15:32:44 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:46.082 15:32:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.082 15:32:44 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:46.082 ' 00:16:47.017 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:47.017 15:32:45 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:47.017 15:32:45 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.017 15:32:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.017 15:32:45 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:47.017 15:32:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.017 15:32:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.017 15:32:45 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:47.017 15:32:45 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:47.594 15:32:45 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:47.594 15:32:45 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:47.594 15:32:45 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:47.594 15:32:45 
spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.594 15:32:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.594 15:32:46 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:47.594 15:32:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.594 15:32:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.594 15:32:46 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:47.594 ' 00:16:48.587 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:48.847 15:32:47 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:48.847 15:32:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:48.847 15:32:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.847 15:32:47 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:48.847 15:32:47 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:48.847 15:32:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.847 15:32:47 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:48.847 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:48.847 ' 00:16:50.227 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:50.227 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:50.227 15:32:48 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.227 15:32:48 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 101521 00:16:50.227 15:32:48 spdkcli_raid -- 
common/autotest_common.sh@954 -- # '[' -z 101521 ']' 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101521 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@959 -- # uname 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101521 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101521' 00:16:50.227 killing process with pid 101521 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 101521 00:16:50.227 15:32:48 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 101521 00:16:51.168 15:32:49 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:51.168 15:32:49 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 101521 ']' 00:16:51.168 15:32:49 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 101521 00:16:51.168 15:32:49 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 101521 ']' 00:16:51.168 15:32:49 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101521 00:16:51.168 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (101521) - No such process 00:16:51.168 15:32:49 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 101521 is not found' 00:16:51.168 Process with pid 101521 is not found 00:16:51.168 15:32:49 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:51.168 15:32:49 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:51.168 15:32:49 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:51.168 15:32:49 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:51.168 00:16:51.168 real 0m7.934s 00:16:51.168 user 0m16.446s 00:16:51.168 sys 0m1.292s 00:16:51.168 15:32:49 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.168 15:32:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.168 ************************************ 00:16:51.168 END TEST spdkcli_raid 00:16:51.168 ************************************ 00:16:51.168 15:32:49 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:51.168 15:32:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:51.168 15:32:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.168 15:32:49 -- common/autotest_common.sh@10 -- # set +x 00:16:51.168 ************************************ 00:16:51.168 START TEST blockdev_raid5f 00:16:51.168 ************************************ 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:51.168 * Looking for test storage... 
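The spdkcli_raid run that just finished hinges on its check_match step: `scripts/spdkcli.py ll /bdevs` output is captured and compared against a saved `spdkcli_raid.test.match` file. A minimal stand-in for that step, using plain `diff` (the real `test/app/match` binary additionally understands wildcard placeholders, so this is a simplification, and `check_match` here is an illustrative name):

```shell
#!/usr/bin/env bash
# Simplified stand-in for the check_match step: compare captured spdkcli
# output against a saved expected file. Plain diff is used here; the real
# test/app/match tool also supports wildcard tokens in the .match file.
check_match() {
    local expected=$1 actual=$2
    if diff -u "$expected" "$actual" > /dev/null; then
        echo "match OK"
    else
        echo "match FAILED"
        return 1
    fi
}

tmpdir=$(mktemp -d)
# Fake "ll /bdevs" output standing in for the live spdkcli listing.
printf '%s\n' 'o- /bdevs' '  o- raid5f' > "$tmpdir/expected.match"
printf '%s\n' 'o- /bdevs' '  o- raid5f' > "$tmpdir/actual.out"
check_match "$tmpdir/expected.match" "$tmpdir/actual.out"
rm -rf "$tmpdir"
```

On success the harness then deletes the regenerated `.test` file, as seen in the trace above.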
00:16:51.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.168 15:32:49 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.168 --rc genhtml_branch_coverage=1 00:16:51.168 --rc genhtml_function_coverage=1 00:16:51.168 --rc genhtml_legend=1 00:16:51.168 --rc geninfo_all_blocks=1 00:16:51.168 --rc geninfo_unexecuted_blocks=1 00:16:51.168 00:16:51.168 ' 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.168 --rc genhtml_branch_coverage=1 00:16:51.168 --rc genhtml_function_coverage=1 00:16:51.168 --rc genhtml_legend=1 00:16:51.168 --rc geninfo_all_blocks=1 00:16:51.168 --rc geninfo_unexecuted_blocks=1 
00:16:51.168 00:16:51.168 ' 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.168 --rc genhtml_branch_coverage=1 00:16:51.168 --rc genhtml_function_coverage=1 00:16:51.168 --rc genhtml_legend=1 00:16:51.168 --rc geninfo_all_blocks=1 00:16:51.168 --rc geninfo_unexecuted_blocks=1 00:16:51.168 00:16:51.168 ' 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.168 --rc genhtml_branch_coverage=1 00:16:51.168 --rc genhtml_function_coverage=1 00:16:51.168 --rc genhtml_legend=1 00:16:51.168 --rc geninfo_all_blocks=1 00:16:51.168 --rc geninfo_unexecuted_blocks=1 00:16:51.168 00:16:51.168 ' 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@671 -- 
# QOS_RUN_TIME=5 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=101783 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:51.168 15:32:49 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 101783 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 101783 ']' 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
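The `lt 1.15 2` check traced earlier in this section (scripts/common.sh splitting each version string on `.`, `-` and `:` via `IFS=.-:` before a component-wise numeric compare) can be condensed into a small standalone sketch. `version_lt` is an illustrative name for this simplified version, not the actual helper, and non-numeric components (e.g. `rc3`) are not handled here:

```shell
#!/usr/bin/env bash
# Condensed sketch of the dotted-version comparison used to gate lcov
# options: split on ".", "-" or ":" and compare component by component,
# treating missing components as 0.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```

The branch-coverage `--rc` options exported above are only enabled when this comparison places the installed lcov below 2.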
00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.168 15:32:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:51.429 [2024-11-26 15:32:49.727685] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:51.429 [2024-11-26 15:32:49.727905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101783 ] 00:16:51.429 [2024-11-26 15:32:49.863223] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:51.429 [2024-11-26 15:32:49.900695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.689 [2024-11-26 15:32:49.941026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:52.260 Malloc0 00:16:52.260 Malloc1 00:16:52.260 Malloc2 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.260 15:32:50 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:52.260 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.260 15:32:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.521 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:52.521 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@748 
-- # jq -r .name 00:16:52.521 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f1ffcede-d8e6-49a8-8db3-b5395fdaa7e1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f1ffcede-d8e6-49a8-8db3-b5395fdaa7e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f1ffcede-d8e6-49a8-8db3-b5395fdaa7e1",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ce1bd0c3-5e07-4161-adbd-6fa3bb915812",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "6cd6f16c-db2e-45c5-9518-3d334298159a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0d4395c3-ba22-4a45-8f8d-b5c7e8383855",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:52.521 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:52.521 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:52.521 15:32:50 blockdev_raid5f -- 
bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:52.521 15:32:50 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 101783 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 101783 ']' 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 101783 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101783 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.521 killing process with pid 101783 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101783' 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 101783 00:16:52.521 15:32:50 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 101783 00:16:53.092 15:32:51 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:53.092 15:32:51 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:53.092 15:32:51 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:53.092 15:32:51 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.092 15:32:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:53.092 ************************************ 00:16:53.092 START TEST bdev_hello_world 00:16:53.092 ************************************ 00:16:53.092 15:32:51 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:53.352 [2024-11-26 15:32:51.605213] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:53.352 [2024-11-26 15:32:51.605403] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101827 ] 00:16:53.352 [2024-11-26 15:32:51.743755] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:53.352 [2024-11-26 15:32:51.780765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.352 [2024-11-26 15:32:51.822335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.612 [2024-11-26 15:32:52.072858] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:53.612 [2024-11-26 15:32:52.072982] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:53.612 [2024-11-26 15:32:52.073016] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:53.612 [2024-11-26 15:32:52.073399] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:53.612 [2024-11-26 15:32:52.073588] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:53.613 [2024-11-26 15:32:52.073642] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:53.613 [2024-11-26 15:32:52.073708] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
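The killprocess teardown traced at the end of each test above follows a consistent shape: a `kill -0` liveness probe, the kill itself, and a tolerant second pass that merely reports "No such process" / "is not found" when the PID is already gone. A standalone sketch of that pattern (the function body is illustrative, not the exact autotest_common.sh implementation, which also inspects `uname` and the process name via `ps`):

```shell
#!/usr/bin/env bash
# Illustrative reimplementation of the killprocess pattern: probe with
# kill -0 first so a repeated invocation on an already-dead PID degrades
# into a log line instead of an error.
killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

sleep 60 &                 # stand-in for the spdk_tgt daemon
tgt_pid=$!
killprocess "$tgt_pid"     # first call terminates and reaps it
killprocess "$tgt_pid"     # second call: PID already reaped, reports not found
```

This is why the trace shows both a "killing process with pid" line and, later in cleanup, a harmless "No such process" from the second kill attempt.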
00:16:53.613 00:16:53.613 [2024-11-26 15:32:52.073760] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:54.183 00:16:54.183 real 0m0.925s 00:16:54.183 user 0m0.518s 00:16:54.183 sys 0m0.300s 00:16:54.183 15:32:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.183 15:32:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 ************************************ 00:16:54.183 END TEST bdev_hello_world 00:16:54.183 ************************************ 00:16:54.183 15:32:52 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:54.183 15:32:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:54.183 15:32:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.183 15:32:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 ************************************ 00:16:54.183 START TEST bdev_bounds 00:16:54.183 ************************************ 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:54.183 Process bdevio pid: 101854 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=101854 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 101854' 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 101854 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 101854 ']' 00:16:54.183 15:32:52 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.183 15:32:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 [2024-11-26 15:32:52.611459] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:54.183 [2024-11-26 15:32:52.611672] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101854 ] 00:16:54.443 [2024-11-26 15:32:52.749086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
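The waitforlisten step above (with `max_retries=100`) polls until the freshly started target is reachable on its `/var/tmp/spdk.sock` UNIX socket. A generic retry loop with the same shape — the helper name and the file-existence predicate are illustrative simplifications; the real helper probes the RPC socket itself:

```shell
#!/usr/bin/env bash
# Generic retry loop in the spirit of waitforlisten: poll a readiness
# predicate (here: path exists) up to max_retries times before giving up.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

tmpdir=$(mktemp -d)
( sleep 0.3; touch "$tmpdir/spdk.sock" ) &   # daemon stand-in
wait_for_path "$tmpdir/spdk.sock" && echo "target is listening"
wait
rm -rf "$tmpdir"
```

Once the probe succeeds, the harness proceeds (here, into `tests.py perform_tests` against the bdevio process).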
00:16:54.443 [2024-11-26 15:32:52.790147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:54.443 [2024-11-26 15:32:52.834800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.443 [2024-11-26 15:32:52.835006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.443 [2024-11-26 15:32:52.835083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.013 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.013 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:55.013 15:32:53 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:55.272 I/O targets: 00:16:55.272 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:55.272 00:16:55.272 00:16:55.272 CUnit - A unit testing framework for C - Version 2.1-3 00:16:55.272 http://cunit.sourceforge.net/ 00:16:55.272 00:16:55.272 00:16:55.272 Suite: bdevio tests on: raid5f 00:16:55.272 Test: blockdev write read block ...passed 00:16:55.272 Test: blockdev write zeroes read block ...passed 00:16:55.272 Test: blockdev write zeroes read no split ...passed 00:16:55.272 Test: blockdev write zeroes read split ...passed 00:16:55.272 Test: blockdev write zeroes read split partial ...passed 00:16:55.272 Test: blockdev reset ...passed 00:16:55.272 Test: blockdev write read 8 blocks ...passed 00:16:55.272 Test: blockdev write read size > 128k ...passed 00:16:55.272 Test: blockdev write read invalid size ...passed 00:16:55.272 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:55.272 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:55.272 Test: blockdev write read max offset ...passed 00:16:55.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:55.272 Test: blockdev writev readv 8 blocks ...passed 00:16:55.272 Test: 
blockdev writev readv 30 x 1block ...passed 00:16:55.272 Test: blockdev writev readv block ...passed 00:16:55.272 Test: blockdev writev readv size > 128k ...passed 00:16:55.272 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:55.272 Test: blockdev comparev and writev ...passed 00:16:55.272 Test: blockdev nvme passthru rw ...passed 00:16:55.272 Test: blockdev nvme passthru vendor specific ...passed 00:16:55.272 Test: blockdev nvme admin passthru ...passed 00:16:55.272 Test: blockdev copy ...passed 00:16:55.272 00:16:55.273 Run Summary: Type Total Ran Passed Failed Inactive 00:16:55.273 suites 1 1 n/a 0 0 00:16:55.273 tests 23 23 23 0 0 00:16:55.273 asserts 130 130 130 0 n/a 00:16:55.273 00:16:55.273 Elapsed time = 0.317 seconds 00:16:55.273 0 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 101854 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 101854 ']' 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 101854 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101854 00:16:55.273 killing process with pid 101854 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101854' 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 101854 00:16:55.273 15:32:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 101854 
00:16:55.844 ************************************ 00:16:55.844 END TEST bdev_bounds 00:16:55.844 ************************************ 00:16:55.844 15:32:54 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:55.844 00:16:55.844 real 0m1.586s 00:16:55.844 user 0m3.707s 00:16:55.844 sys 0m0.444s 00:16:55.844 15:32:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.844 15:32:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:55.844 15:32:54 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:55.844 15:32:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:55.844 15:32:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.844 15:32:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:55.844 ************************************ 00:16:55.844 START TEST bdev_nbd 00:16:55.844 ************************************ 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:55.844 15:32:54 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=101908 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 101908 /var/tmp/spdk-nbd.sock 00:16:55.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 101908 ']' 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.844 15:32:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:55.844 [2024-11-26 15:32:54.290447] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:55.844 [2024-11-26 15:32:54.290642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.104 [2024-11-26 15:32:54.432811] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:56.104 [2024-11-26 15:32:54.467846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.104 [2024-11-26 15:32:54.507851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:56.676 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:56.936 1+0 records in 00:16:56.936 1+0 records out 00:16:56.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410139 s, 10.0 MB/s 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:56.936 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:56.936 15:32:55 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:57.196 { 00:16:57.196 "nbd_device": "/dev/nbd0", 00:16:57.196 "bdev_name": "raid5f" 00:16:57.196 } 00:16:57.196 ]' 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:57.196 { 00:16:57.196 "nbd_device": "/dev/nbd0", 00:16:57.196 "bdev_name": "raid5f" 00:16:57.196 } 00:16:57.196 ]' 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.196 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.456 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:57.717 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:57.717 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:57.717 15:32:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:57.717 15:32:56 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.717 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:57.977 /dev/nbd0 00:16:57.977 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:57.977 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:57.977 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:57.977 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:57.977 15:32:56 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:57.977 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:57.977 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:57.977 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.978 1+0 records in 00:16:57.978 1+0 records out 00:16:57.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440107 s, 9.3 MB/s 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:57.978 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:58.238 { 00:16:58.238 "nbd_device": "/dev/nbd0", 00:16:58.238 "bdev_name": "raid5f" 00:16:58.238 } 00:16:58.238 ]' 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:58.238 { 00:16:58.238 "nbd_device": "/dev/nbd0", 00:16:58.238 "bdev_name": "raid5f" 00:16:58.238 } 00:16:58.238 ]' 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
bs=4096 count=256 00:16:58.238 256+0 records in 00:16:58.238 256+0 records out 00:16:58.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138819 s, 75.5 MB/s 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:58.238 256+0 records in 00:16:58.238 256+0 records out 00:16:58.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288992 s, 36.3 MB/s 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.238 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.499 15:32:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:58.759 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:59.019 malloc_lvol_verify 00:16:59.019 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:59.019 4eb63eb0-4508-4585-9f2f-e453a2b29af3 00:16:59.279 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:59.279 9e0f4639-64bd-4d77-b24a-3f81b2131e24 00:16:59.279 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:59.538 /dev/nbd0 
00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:59.538 mke2fs 1.47.0 (5-Feb-2023) 00:16:59.538 Discarding device blocks: 0/4096 done 00:16:59.538 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:59.538 00:16:59.538 Allocating group tables: 0/1 done 00:16:59.538 Writing inode tables: 0/1 done 00:16:59.538 Creating journal (1024 blocks): done 00:16:59.538 Writing superblocks and filesystem accounting information: 0/1 done 00:16:59.538 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.538 15:32:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 101908 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 101908 ']' 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 101908 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101908 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.798 killing process with pid 101908 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101908' 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 101908 00:16:59.798 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@978 -- # wait 101908 00:17:00.369 15:32:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:00.369 00:17:00.369 real 0m4.402s 00:17:00.369 user 0m6.212s 00:17:00.369 sys 0m1.362s 00:17:00.369 15:32:58 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.369 15:32:58 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.369 ************************************ 00:17:00.369 END TEST bdev_nbd 00:17:00.369 ************************************ 00:17:00.369 15:32:58 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:00.369 15:32:58 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:00.369 15:32:58 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:00.369 15:32:58 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:00.369 15:32:58 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.369 15:32:58 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.369 15:32:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.369 ************************************ 00:17:00.369 START TEST bdev_fio 00:17:00.369 ************************************ 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:00.369 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=raid5f 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:00.369 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:00.370 ************************************ 00:17:00.370 START TEST bdev_fio_rw_verify 00:17:00.370 ************************************ 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # 
local fio_dir=/usr/src/fio 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:00.370 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:00.630 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:00.630 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:00.630 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:00.630 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:00.630 15:32:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:00.630 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:00.630 fio-3.35 00:17:00.630 Starting 1 thread 00:17:12.854 00:17:12.854 job_raid5f: (groupid=0, jobs=1): err= 0: pid=102093: Tue Nov 26 15:33:09 2024 00:17:12.854 read: IOPS=12.7k, BW=49.6MiB/s (52.0MB/s)(496MiB/10001msec) 00:17:12.854 slat (nsec): min=16801, max=95191, avg=18349.17, stdev=1682.73 00:17:12.854 clat (usec): min=11, max=302, avg=127.32, stdev=43.33 00:17:12.854 lat (usec): min=30, max=321, avg=145.67, stdev=43.55 00:17:12.854 clat percentiles (usec): 00:17:12.854 | 50.000th=[ 131], 99.000th=[ 208], 99.900th=[ 227], 99.990th=[ 258], 00:17:12.854 | 99.999th=[ 289] 00:17:12.854 write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(514MiB/9879msec); 0 zone resets 00:17:12.854 slat (usec): min=7, max=251, avg=15.95, stdev= 3.62 00:17:12.854 clat (usec): min=57, max=1777, avg=289.67, stdev=42.52 00:17:12.854 lat (usec): min=71, max=2028, avg=305.62, stdev=43.79 00:17:12.854 clat percentiles (usec): 00:17:12.854 | 50.000th=[ 293], 99.000th=[ 363], 99.900th=[ 635], 99.990th=[ 1418], 00:17:12.854 | 99.999th=[ 1696] 00:17:12.854 bw ( KiB/s): min=50608, max=55184, per=98.98%, avg=52683.79, stdev=1576.80, samples=19 00:17:12.854 iops : min=12652, max=13796, avg=13170.95, stdev=394.20, samples=19 00:17:12.854 lat (usec) : 20=0.01%, 50=0.01%, 100=16.43%, 250=40.61%, 500=42.88% 00:17:12.854 lat (usec) : 750=0.04%, 1000=0.02% 00:17:12.854 lat (msec) : 2=0.02% 00:17:12.854 cpu : usr=99.03%, sys=0.30%, ctx=26, majf=0, minf=13448 00:17:12.854 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:12.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.854 complete : 0=0.0%, 4=90.0%, 8=10.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.854 issued rwts: total=126908,131461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.854 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:12.854 00:17:12.854 Run status group 0 (all jobs): 00:17:12.854 READ: bw=49.6MiB/s (52.0MB/s), 49.6MiB/s-49.6MiB/s (52.0MB/s-52.0MB/s), io=496MiB (520MB), run=10001-10001msec 00:17:12.854 WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=514MiB (538MB), run=9879-9879msec 00:17:12.854 ----------------------------------------------------- 00:17:12.854 Suppressions used: 00:17:12.854 count bytes template 00:17:12.854 1 7 /usr/src/fio/parse.c 00:17:12.854 610 58560 /usr/src/fio/iolog.c 00:17:12.854 1 8 libtcmalloc_minimal.so 00:17:12.854 1 904 libcrypto.so 00:17:12.854 ----------------------------------------------------- 00:17:12.854 00:17:12.854 00:17:12.854 real 0m11.393s 00:17:12.854 user 0m11.624s 00:17:12.854 sys 0m0.680s 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:12.854 ************************************ 00:17:12.854 END TEST bdev_fio_rw_verify 00:17:12.854 ************************************ 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1286 -- # local bdev_type= 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:12.854 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f1ffcede-d8e6-49a8-8db3-b5395fdaa7e1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f1ffcede-d8e6-49a8-8db3-b5395fdaa7e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f1ffcede-d8e6-49a8-8db3-b5395fdaa7e1",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ce1bd0c3-5e07-4161-adbd-6fa3bb915812",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "6cd6f16c-db2e-45c5-9518-3d334298159a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0d4395c3-ba22-4a45-8f8d-b5c7e8383855",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:12.855 /home/vagrant/spdk_repo/spdk 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:12.855 00:17:12.855 real 0m11.690s 00:17:12.855 user 0m11.759s 00:17:12.855 sys 0m0.811s 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.855 15:33:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:12.855 ************************************ 00:17:12.855 END TEST bdev_fio 00:17:12.855 
************************************ 00:17:12.855 15:33:10 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:12.855 15:33:10 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:12.855 15:33:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:12.855 15:33:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.855 15:33:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:12.855 ************************************ 00:17:12.855 START TEST bdev_verify 00:17:12.855 ************************************ 00:17:12.855 15:33:10 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:12.855 [2024-11-26 15:33:10.520322] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:17:12.855 [2024-11-26 15:33:10.520465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102243 ] 00:17:12.855 [2024-11-26 15:33:10.662435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:12.855 [2024-11-26 15:33:10.700253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:12.855 [2024-11-26 15:33:10.741940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.855 [2024-11-26 15:33:10.742036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.855 Running I/O for 5 seconds... 00:17:14.736 11070.00 IOPS, 43.24 MiB/s [2024-11-26T15:33:14.155Z] 11185.00 IOPS, 43.69 MiB/s [2024-11-26T15:33:15.095Z] 11258.33 IOPS, 43.98 MiB/s [2024-11-26T15:33:16.035Z] 11242.50 IOPS, 43.92 MiB/s [2024-11-26T15:33:16.296Z] 11231.40 IOPS, 43.87 MiB/s 00:17:17.817 Latency(us) 00:17:17.817 [2024-11-26T15:33:16.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.817 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:17.817 Verification LBA range: start 0x0 length 0x2000 00:17:17.817 raid5f : 5.02 6690.25 26.13 0.00 0.00 28775.58 433.77 21135.12 00:17:17.817 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:17.817 Verification LBA range: start 0x2000 length 0x2000 00:17:17.817 raid5f : 5.02 4549.54 17.77 0.00 0.00 42368.53 185.65 30388.87 00:17:17.817 [2024-11-26T15:33:16.296Z] =================================================================================================================== 00:17:17.817 [2024-11-26T15:33:16.296Z] Total : 11239.79 43.91 0.00 0.00 34278.74 185.65 30388.87 00:17:18.078 00:17:18.078 real 0m5.971s 00:17:18.078 user 0m10.997s 00:17:18.078 sys 0m0.338s 00:17:18.078 15:33:16 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.078 15:33:16 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 ************************************ 00:17:18.078 END TEST bdev_verify 00:17:18.078 ************************************ 00:17:18.078 15:33:16 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:18.078 15:33:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:18.078 15:33:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.078 15:33:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.078 ************************************ 00:17:18.078 START TEST bdev_verify_big_io 00:17:18.078 ************************************ 00:17:18.078 15:33:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:18.338 [2024-11-26 15:33:16.552254] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:17:18.338 [2024-11-26 15:33:16.552391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102330 ] 00:17:18.338 [2024-11-26 15:33:16.687919] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:18.338 [2024-11-26 15:33:16.725558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:18.338 [2024-11-26 15:33:16.769978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.338 [2024-11-26 15:33:16.770069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.598 Running I/O for 5 seconds... 
00:17:20.959 633.00 IOPS, 39.56 MiB/s [2024-11-26T15:33:20.376Z] 761.00 IOPS, 47.56 MiB/s [2024-11-26T15:33:21.310Z] 782.00 IOPS, 48.88 MiB/s [2024-11-26T15:33:22.245Z] 793.25 IOPS, 49.58 MiB/s [2024-11-26T15:33:22.505Z] 812.00 IOPS, 50.75 MiB/s 00:17:24.026 Latency(us) 00:17:24.026 [2024-11-26T15:33:22.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.026 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:24.026 Verification LBA range: start 0x0 length 0x200 00:17:24.026 raid5f : 5.22 462.43 28.90 0.00 0.00 6904740.82 315.96 301603.84 00:17:24.026 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:24.026 Verification LBA range: start 0x200 length 0x200 00:17:24.026 raid5f : 5.29 360.04 22.50 0.00 0.00 8802755.19 203.50 380203.62 00:17:24.026 [2024-11-26T15:33:22.505Z] =================================================================================================================== 00:17:24.026 [2024-11-26T15:33:22.505Z] Total : 822.47 51.40 0.00 0.00 7742100.10 203.50 380203.62 00:17:24.287 00:17:24.287 real 0m6.228s 00:17:24.287 user 0m11.530s 00:17:24.287 sys 0m0.324s 00:17:24.287 15:33:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.287 15:33:22 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.287 ************************************ 00:17:24.287 END TEST bdev_verify_big_io 00:17:24.287 ************************************ 00:17:24.287 15:33:22 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:24.287 15:33:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:24.287 15:33:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.287 15:33:22 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:24.547 ************************************ 00:17:24.547 START TEST bdev_write_zeroes 00:17:24.547 ************************************ 00:17:24.547 15:33:22 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:24.547 [2024-11-26 15:33:22.853313] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:17:24.547 [2024-11-26 15:33:22.853458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102416 ] 00:17:24.547 [2024-11-26 15:33:22.988201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:24.807 [2024-11-26 15:33:23.028645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.807 [2024-11-26 15:33:23.068847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.066 Running I/O for 1 seconds... 
00:17:26.006 29751.00 IOPS, 116.21 MiB/s 00:17:26.006 Latency(us) 00:17:26.006 [2024-11-26T15:33:24.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.006 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:26.006 raid5f : 1.01 29726.10 116.12 0.00 0.00 4293.42 1435.19 5826.44 00:17:26.006 [2024-11-26T15:33:24.485Z] =================================================================================================================== 00:17:26.006 [2024-11-26T15:33:24.485Z] Total : 29726.10 116.12 0.00 0.00 4293.42 1435.19 5826.44 00:17:26.266 00:17:26.267 real 0m1.925s 00:17:26.267 user 0m1.511s 00:17:26.267 sys 0m0.302s 00:17:26.267 15:33:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.267 15:33:24 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:26.267 ************************************ 00:17:26.267 END TEST bdev_write_zeroes 00:17:26.267 ************************************ 00:17:26.527 15:33:24 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:26.527 15:33:24 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:26.527 15:33:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.527 15:33:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.527 ************************************ 00:17:26.527 START TEST bdev_json_nonenclosed 00:17:26.527 ************************************ 00:17:26.527 15:33:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:26.527 [2024-11-26 
15:33:24.854750] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:17:26.527 [2024-11-26 15:33:24.854874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102458 ] 00:17:26.527 [2024-11-26 15:33:24.989653] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:26.788 [2024-11-26 15:33:25.023135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.788 [2024-11-26 15:33:25.066653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.788 [2024-11-26 15:33:25.066758] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:26.788 [2024-11-26 15:33:25.066780] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:26.788 [2024-11-26 15:33:25.066791] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:26.788 00:17:26.788 real 0m0.407s 00:17:26.788 user 0m0.178s 00:17:26.788 sys 0m0.125s 00:17:26.788 15:33:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.788 15:33:25 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:26.788 ************************************ 00:17:26.788 END TEST bdev_json_nonenclosed 00:17:26.788 ************************************ 00:17:26.788 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:26.788 15:33:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:26.788 
15:33:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.788 15:33:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.788 ************************************ 00:17:26.788 START TEST bdev_json_nonarray 00:17:26.788 ************************************ 00:17:26.788 15:33:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:27.049 [2024-11-26 15:33:25.335276] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:17:27.049 [2024-11-26 15:33:25.335398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102488 ] 00:17:27.049 [2024-11-26 15:33:25.473939] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:27.049 [2024-11-26 15:33:25.513549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.310 [2024-11-26 15:33:25.556167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.310 [2024-11-26 15:33:25.556306] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:27.310 [2024-11-26 15:33:25.556329] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:17:27.310 [2024-11-26 15:33:25.556338] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:27.310
00:17:27.310 real 0m0.419s
00:17:27.310 user 0m0.176s
00:17:27.310 sys 0m0.138s
00:17:27.311 15:33:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:27.311 15:33:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:17:27.311 ************************************
00:17:27.311 END TEST bdev_json_nonarray
00:17:27.311 ************************************
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:17:27.311 15:33:25 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:17:27.311
00:17:27.311 real 0m36.365s
00:17:27.311 user 0m48.666s
00:17:27.311 sys 0m5.379s
00:17:27.311 15:33:25 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:27.311 15:33:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:17:27.311 ************************************
00:17:27.311 END TEST blockdev_raid5f
00:17:27.311 ************************************
00:17:27.571 15:33:25 -- spdk/autotest.sh@194 -- # uname -s
00:17:27.571 15:33:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:17:27.571 15:33:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:17:27.571 15:33:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:17:27.571 15:33:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@260 -- # timing_exit lib
00:17:27.571 15:33:25 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:27.571 15:33:25 -- common/autotest_common.sh@10 -- # set +x
00:17:27.571 15:33:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:17:27.571 15:33:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:17:27.571 15:33:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:17:27.571 15:33:25 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:17:27.571 15:33:25 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:17:27.571 15:33:25 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:17:27.571 15:33:25 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:17:27.571 15:33:25 -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:27.571 15:33:25 -- common/autotest_common.sh@10 -- # set +x
00:17:27.571 15:33:25 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:17:27.571 15:33:25 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:17:27.571 15:33:25 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:17:27.571 15:33:25 -- common/autotest_common.sh@10 -- # set +x
00:17:30.115 INFO: APP EXITING
00:17:30.115 INFO: killing all VMs
00:17:30.115 INFO: killing vhost app
00:17:30.115 INFO: EXIT DONE
00:17:30.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:30.375 Waiting for block devices as requested
00:17:30.635 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:17:31.589 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:17:31.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:31.589 Cleaning
00:17:31.589 Removing: /var/run/dpdk/spdk0/config
00:17:31.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:17:31.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:17:31.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:17:31.589 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:17:31.589 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:17:31.589 Removing: /var/run/dpdk/spdk0/hugepage_info
00:17:31.589 Removing: /dev/shm/spdk_tgt_trace.pid70717
00:17:31.589 Removing: /var/run/dpdk/spdk0
00:17:31.589 Removing: /var/run/dpdk/spdk_pid100533
00:17:31.589 Removing: /var/run/dpdk/spdk_pid100850
00:17:31.589 Removing: /var/run/dpdk/spdk_pid101521
00:17:31.589 Removing: /var/run/dpdk/spdk_pid101783
00:17:31.589 Removing: /var/run/dpdk/spdk_pid101827
00:17:31.589 Removing: /var/run/dpdk/spdk_pid101854
00:17:31.589 Removing: /var/run/dpdk/spdk_pid102078
00:17:31.589 Removing: /var/run/dpdk/spdk_pid102243
00:17:31.590 Removing: /var/run/dpdk/spdk_pid102330
00:17:31.590 Removing: /var/run/dpdk/spdk_pid102416
00:17:31.590 Removing: /var/run/dpdk/spdk_pid102458
00:17:31.590 Removing: /var/run/dpdk/spdk_pid102488
00:17:31.590 Removing: /var/run/dpdk/spdk_pid70553
00:17:31.590 Removing: /var/run/dpdk/spdk_pid70717
00:17:31.590 Removing: /var/run/dpdk/spdk_pid70924
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71010
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71040
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71146
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71164
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71352
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71431
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71505
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71605
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71691
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71725
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71759
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71832
00:17:31.590 Removing: /var/run/dpdk/spdk_pid71944
00:17:31.590 Removing: /var/run/dpdk/spdk_pid72371
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72424
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72476
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72493
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72564
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72580
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72649
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72665
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72707
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72725
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72767
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72787
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72925
00:17:31.850 Removing: /var/run/dpdk/spdk_pid72956
00:17:31.850 Removing: /var/run/dpdk/spdk_pid73045
00:17:31.850 Removing: /var/run/dpdk/spdk_pid74200
00:17:31.850 Removing: /var/run/dpdk/spdk_pid74395
00:17:31.850 Removing: /var/run/dpdk/spdk_pid74524
00:17:31.850 Removing: /var/run/dpdk/spdk_pid75134
00:17:31.850 Removing: /var/run/dpdk/spdk_pid75329
00:17:31.850 Removing: /var/run/dpdk/spdk_pid75458
00:17:31.850 Removing: /var/run/dpdk/spdk_pid76063
00:17:31.850 Removing: /var/run/dpdk/spdk_pid76382
00:17:31.850 Removing: /var/run/dpdk/spdk_pid76511
00:17:31.850 Removing: /var/run/dpdk/spdk_pid77841
00:17:31.850 Removing: /var/run/dpdk/spdk_pid78072
00:17:31.850 Removing: /var/run/dpdk/spdk_pid78207
00:17:31.850 Removing: /var/run/dpdk/spdk_pid79542
00:17:31.850 Removing: /var/run/dpdk/spdk_pid79773
00:17:31.850 Removing: /var/run/dpdk/spdk_pid79907
00:17:31.850 Removing: /var/run/dpdk/spdk_pid81243
00:17:31.850 Removing: /var/run/dpdk/spdk_pid81672
00:17:31.850 Removing: /var/run/dpdk/spdk_pid81801
00:17:31.850 Removing: /var/run/dpdk/spdk_pid83226
00:17:31.850 Removing: /var/run/dpdk/spdk_pid83474
00:17:31.850 Removing: /var/run/dpdk/spdk_pid83603
00:17:31.850 Removing: /var/run/dpdk/spdk_pid85027
00:17:31.850 Removing: /var/run/dpdk/spdk_pid85270
00:17:31.850 Removing: /var/run/dpdk/spdk_pid85409
00:17:31.850 Removing: /var/run/dpdk/spdk_pid86830
00:17:31.850 Removing: /var/run/dpdk/spdk_pid87301
00:17:31.850 Removing: /var/run/dpdk/spdk_pid87430
00:17:31.850 Removing: /var/run/dpdk/spdk_pid87557
00:17:31.850 Removing: /var/run/dpdk/spdk_pid87963
00:17:31.850 Removing: /var/run/dpdk/spdk_pid88678
00:17:31.850 Removing: /var/run/dpdk/spdk_pid89037
00:17:31.850 Removing: /var/run/dpdk/spdk_pid89717
00:17:31.850 Removing: /var/run/dpdk/spdk_pid90135
00:17:31.850 Removing: /var/run/dpdk/spdk_pid90872
00:17:31.850 Removing: /var/run/dpdk/spdk_pid91261
00:17:31.850 Removing: /var/run/dpdk/spdk_pid93170
00:17:31.850 Removing: /var/run/dpdk/spdk_pid93597
00:17:31.850 Removing: /var/run/dpdk/spdk_pid94015
00:17:31.850 Removing: /var/run/dpdk/spdk_pid96051
00:17:31.850 Removing: /var/run/dpdk/spdk_pid96521
00:17:31.850 Removing: /var/run/dpdk/spdk_pid97026
00:17:32.111 Removing: /var/run/dpdk/spdk_pid98069
00:17:32.111 Removing: /var/run/dpdk/spdk_pid98386
00:17:32.111 Removing: /var/run/dpdk/spdk_pid99301
00:17:32.111 Removing: /var/run/dpdk/spdk_pid99618
00:17:32.111 Clean
00:17:32.111 15:33:30 -- common/autotest_common.sh@1453 -- # return 0
00:17:32.111 15:33:30 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:17:32.111 15:33:30 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:32.111 15:33:30 -- common/autotest_common.sh@10 -- # set +x
00:17:32.111 15:33:30 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:17:32.111 15:33:30 -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:32.111 15:33:30 -- common/autotest_common.sh@10 -- # set +x
00:17:32.111 15:33:30 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:17:32.111 15:33:30 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:17:32.111 15:33:30 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:17:32.111 15:33:30 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:17:32.111 15:33:30 -- spdk/autotest.sh@398 -- # hostname
00:17:32.111 15:33:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:17:32.371 geninfo: WARNING: invalid characters removed from testname!
00:17:58.941 15:33:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:58.941 15:33:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:59.882 15:33:58 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:01.792 15:34:00 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:03.703 15:34:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:05.616 15:34:04 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:08.193 15:34:06 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:08.193 15:34:06 -- spdk/autorun.sh@1 -- $ timing_finish
00:18:08.193 15:34:06 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:18:08.193 15:34:06 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:08.193 15:34:06 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:08.193 15:34:06 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:08.193 + [[ -n 6162 ]]
00:18:08.193 + sudo kill 6162
00:18:08.226 [Pipeline] }
00:18:08.245 [Pipeline] // timeout
00:18:08.253 [Pipeline] }
00:18:08.270 [Pipeline] // stage
00:18:08.278 [Pipeline] }
00:18:08.292 [Pipeline] // catchError
00:18:08.302 [Pipeline] stage
00:18:08.304 [Pipeline] { (Stop VM)
00:18:08.317 [Pipeline] sh
00:18:08.602 + vagrant halt
00:18:11.144 ==> default: Halting domain...
00:18:19.287 [Pipeline] sh
00:18:19.568 + vagrant destroy -f
00:18:22.113 ==> default: Removing domain...
00:18:22.127 [Pipeline] sh
00:18:22.412 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:22.423 [Pipeline] }
00:18:22.438 [Pipeline] // stage
00:18:22.443 [Pipeline] }
00:18:22.458 [Pipeline] // dir
00:18:22.464 [Pipeline] }
00:18:22.478 [Pipeline] // wrap
00:18:22.484 [Pipeline] }
00:18:22.497 [Pipeline] // catchError
00:18:22.506 [Pipeline] stage
00:18:22.508 [Pipeline] { (Epilogue)
00:18:22.522 [Pipeline] sh
00:18:22.808 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:27.025 [Pipeline] catchError
00:18:27.027 [Pipeline] {
00:18:27.044 [Pipeline] sh
00:18:27.334 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:27.334 Artifacts sizes are good
00:18:27.344 [Pipeline] }
00:18:27.358 [Pipeline] // catchError
00:18:27.370 [Pipeline] archiveArtifacts
00:18:27.380 Archiving artifacts
00:18:27.497 [Pipeline] cleanWs
00:18:27.512 [WS-CLEANUP] Deleting project workspace...
00:18:27.512 [WS-CLEANUP] Deferred wipeout is used...
00:18:27.529 [WS-CLEANUP] done
00:18:27.531 [Pipeline] }
00:18:27.548 [Pipeline] // stage
00:18:27.556 [Pipeline] }
00:18:27.572 [Pipeline] // node
00:18:27.578 [Pipeline] End of Pipeline
00:18:27.638 Finished: SUCCESS